
Securing internal service communication in a Kubernetes cluster with HTTPS and TLS

I'm working on a set of microservices, and I need to secure the communication between the individual services (using HTTPS + TLS).

The service deployments each have a Service object with an assigned ClusterIP. kube-dns automatically creates DNS records of the form *.cluster.local when the services are created. The problem is that my org doesn't allow me to create TLS certificates with a subject name (SN) containing "local". Any certificate I create for the services would therefore fail certificate validation, because the SN doesn't match the domain name. What I would like to do is add a CNAME to kube-dns with my own custom domain name (i.e. servicename.cluster-internal.com) that would return the *.cluster.local domain, which would then resolve to the correct ClusterIP. I would create the certificates with the SN set to my custom domains, so certificate validation would not fail when the services handshake and set up a secure connection.
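
To illustrate the mapping I mean: on a cluster where CoreDNS backs the cluster DNS (which may or may not apply to my setup), it could be sketched with the rewrite plugin in the Corefile. servicename.cluster-internal.com is the hypothetical custom domain from above, and the default namespace is assumed:

.:53 {
    # Rewrite queries for the custom domain to the in-cluster name
    # before the kubernetes plugin resolves them to the ClusterIP.
    rewrite name servicename.cluster-internal.com servicename.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
}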

I'm open to other ways of doing this, but I would prefer not to take a dependency on another DNS provider or to have to write my own.

Before we solved the issue the correct way, we disabled certificate validation in the services running in the cluster. I don't recommend this approach, but it's an easy way to get unblocked.
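
For reference only, in a Go service that stopgap looks something like the minimal sketch below (not something to ship):

package main

import (
	"crypto/tls"
	"net/http"
)

// insecureClient returns an HTTP client that skips verification of
// the server certificate chain and host name. Temporary unblock only.
func insecureClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
}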

We solved this the correct way by customizing the cluster DNS domain. Since we deploy clusters with ACS-Engine, it was only a matter of redeploying our clusters with some updated options in the cluster definition.

See below:

"kubernetesConfig": {
"kubeletConfig": {
    "--cluster-domain": "domain.you.own"
  }
}
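
For context, this fragment sits under properties.orchestratorProfile in the ACS-Engine API model; trimmed to the relevant fields, the cluster definition looks roughly like:

{
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "kubeletConfig": {
          "--cluster-domain": "domain.you.own"
        }
      }
    }
  }
}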

This gave us the ability to cut certs for "domain.you.own" and turn certificate validation back on.
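
As a sketch of the certificate side (assuming OpenSSL 1.1.1+ for -addext and the default namespace; the file names are hypothetical), a key and CSR whose subject alternative name matches the new cluster domain can be generated like this:

# Key + CSR with a SAN under the custom cluster domain.
openssl req -new -nodes -newkey rsa:2048 \
  -keyout servicename.key -out servicename.csr \
  -subj "/CN=servicename.default.svc.domain.you.own" \
  -addext "subjectAltName=DNS:servicename.default.svc.domain.you.own"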
