
ASP.NET Core application in Docker over HTTPS

We've recently run into the requirement to serve our containerized application over HTTPS in Docker.

Following Microsoft's guide I was able to reach the container application from my host machine and everything worked fine, including SSL.

The problem happens when trying to communicate with the application from inside the Docker environment. The same container (or other containers) cannot verify the certificate when trying to talk to the application. This behavior can be observed in Microsoft's linked example application as well. Trying to curl the website from within the container (curl https://localhost) always yields:

curl: (60) SSL certificate problem: unable to get local issuer certificate

This isn't a problem specific to curl; calls made with HttpClient return SSL-related errors as well.
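To see where verification breaks, it can help to inspect the chain the server actually presents from inside the container. A rough check (adjust host and port to wherever the app listens):

# Print the certificate chain Kestrel presents and openssl's verdict on it
openssl s_client -connect localhost:443 -servername localhost -showcerts </dev/null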

Figuring it would work like on Windows, where you simply add the self-signed .pfx to your certificate store, I created a self-signed certificate using

New-SelfSignedCertificate -DnsName "localhost", "dockerDnsName", "127.0.0.1" -CertStoreLocation "cert:\LocalMachine\My" -NotAfter (Get-Date("2050-01-01"))

I need both localhost and dockerDnsName in the certificate's SubjectAlternativeName because containers within the Docker network will talk to this container using that name. I then added the certificate to my host's trusted root CAs.

I followed Microsoft's guide: I added the pfx to the container, set the Kestrel environment variables to the relevant values (ASPNETCORE_Kestrel__Certificates__Default__Path and ASPNETCORE_Kestrel__Certificates__Default__Password), and booted the container.

Accessing the container via the browser from the host still worked. Accessing the website from within the container yielded the SSL error again. I then converted the .pfx to a .crt inside the container via

openssl pkcs12 -in myRootCA.pfx -clcerts -nokeys -out myRootCA.crt

added the resulting .crt to /usr/local/share/ca-certificates/ and ran update-ca-certificates. To my understanding that should have fixed it, but I still get the same SSL-related errors.
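For what it's worth, two checks might help narrow this down: look at the names the certificate was actually issued for, and point curl straight at the file. If the second command succeeds while a plain curl still fails, the trust-store update is the missing piece (adjust the port to wherever the app listens):

# Show the SubjectAlternativeNames on the certificate
openssl x509 -in /usr/local/share/ca-certificates/myRootCA.crt -noout -text | grep -A 1 "Subject Alternative Name"
# Verify against the file directly, bypassing the system bundle
curl --cacert /usr/local/share/ca-certificates/myRootCA.crt https://localhost:5000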

Edit: No idea if it makes any difference, but this particular application is served (docker-)internally at port 5000 and the port-mapping to the host is 5000:5000.

After a lot more trial and error, I ended up redoing the whole certificate setup. Only this time, I went with openssl all the way.

I'll briefly outline my steps for anyone facing the same problem:

I followed this post to the letter.

This way I've set up a CA certificate that I can trust in both Windows and Linux (Docker) environments, called cacert.crt. I then created a certificate signing request as outlined in the linked answer, used the CA certificate to sign it, and obtained a valid SSL certificate, called servercert.pfx. The guide only uses .pem files, but converting between the formats with the openssl CLI tool is really easy.
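For reference, the whole flow in openssl looks roughly like the sketch below. This is only an outline of the steps from the linked post; the key and extension file names (cakey.pem, server.csr, san.ext) and the CA subject are placeholders of my own, while the cacert/servercert names match the ones used above.

# 1. Create the CA key and a self-signed CA certificate (cacert.pem / cacert.crt)
openssl genrsa -out cakey.pem 4096
openssl req -x509 -new -key cakey.pem -sha256 -days 3650 -subj "/CN=My Local Dev CA" -out cacert.pem
# update-ca-certificates expects a .crt extension; for PEM content that is just a copy
cp cacert.pem cacert.crt

# 2. Create the server key and a certificate signing request
openssl genrsa -out serverkey.pem 2048
openssl req -new -key serverkey.pem -subj "/CN=localhost" -out server.csr

# 3. Sign the request with the CA, listing every name the container is reached by as a SAN
printf 'subjectAltName = DNS:localhost, DNS:dockerDnsName, IP:127.0.0.1\n' > san.ext
openssl x509 -req -in server.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -days 825 -sha256 -extfile san.ext -out servercert.pem

# 4. Bundle key and certificate into the .pfx Kestrel expects (prompts for an export password)
openssl pkcs12 -export -out servercert.pfx -inkey serverkey.pem -in servercert.pem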

I then checked both into my source control and edited my Dockerfile and compose file.

I then installed the cacert.crt into my local machine's cert store under the Trusted Root Certification Authorities category.
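On Windows this can be done through the Certificates MMC snap-in, or from an elevated prompt with certutil; a one-line sketch:

certutil -addstore -f Root cacert.crt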

In the Dockerfile I put the following right before the ENTRYPOINT:

COPY ["servercert.pfx", "/https/servercert.pfx"]
COPY ["cacert.crt", "/usr/local/share/ca-certificates/cacert.crt"]
RUN update-ca-certificates

In the docker-compose.yml I put the following under environment:

 - ASPNETCORE_URLS=https://0.0.0.0:5000
 - ASPNETCORE_HTTPS_PORT=5000
 - ASPNETCORE_Kestrel__Certificates__Default__Password={YourPw}
 - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/servercert.pfx

The actual port number as well as the value for the password have to be adapted as needed, obviously.
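For anyone not using compose, the same wiring can be expressed as docker run flags. A rough equivalent sketch, where myapp:latest stands in for your image name:

docker run --rm -it -p 5000:5000 \
  -e ASPNETCORE_URLS=https://0.0.0.0:5000 \
  -e ASPNETCORE_HTTPS_PORT=5000 \
  -e "ASPNETCORE_Kestrel__Certificates__Default__Password={YourPw}" \
  -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/servercert.pfx \
  myapp:latest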

This solved all my problems. All browsers now happily navigate to https://localhost:5000, which is served from within Docker, without any SSL errors. I can also connect to the Docker container and run curl https://localhost:5000 and curl https://dockerDnsName:5000 with no problem. This also fixed all issues with HttpClient.

Some additional info to add to Jejuni's answer.

I converted the .pem files into a .pfx with

sudo openssl pkcs12 -export -out servercert.pfx -inkey serverkey.pem -in servercert.pem

I also had to make the file readable:

sudo chmod +r servercert.pfx
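To sanity-check what actually ended up in the bundle, the pfx can be inspected with openssl again (it prompts for the import password):

openssl pkcs12 -in servercert.pfx -info -noout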

On Arch Linux, I added the CA certificate to the trusted anchors:

sudo trust anchor --store cacert.pem
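To confirm the anchor was picked up, p11-kit's trust tool can list the store; grep for whatever common name you gave the CA:

trust list | grep -i "dev ca"    # substitute part of your CA's common name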

Finally, after following several misleading guides across the internet, HTTPS on localhost is working like it should.
