
How to find out why Kubernetes dns-controller is not creating records for our own/company domain?

Our kOps-based Kubernetes cluster in AWS stopped creating external DNS records in Route53, such as service-name-svc.testing.companydomain.com. Is there a way to check which flags the dns-controller running in the cluster was started with? Any other suggestions on how to troubleshoot this are welcome!
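One way to inspect the running dns-controller's flags and recent activity (a minimal sketch, assuming kOps's default Deployment name "dns-controller" in the kube-system namespace):

```sh
# Dump the Deployment and look at the container's command/args for flags
# ("dns-controller" in kube-system is the kOps default; adjust if yours differs)
kubectl -n kube-system get deployment dns-controller -o yaml

# The logs usually say why a record was or was not created
kubectl -n kube-system logs deployment/dns-controller --tail=100
```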

That said, cluster-internal records such as service-name-svc.namespace.svc.cluster.local resolve fine:

Server:    100.32.0.10
Address 1: 100.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      service-name-svc.namespace.svc.cluster.local
Address 1: 100.32.12.141 service-name-svc.namespace.svc.cluster.local

The typical way of creating Route53 records in a kOps cluster is to deploy external-dns on the control plane nodes.
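If external-dns is (or should be) deployed, its container flags control which records it manages. A rough sketch of typical arguments (the values here are illustrative, not your cluster's actual configuration):

```yaml
# Fragment of an external-dns container spec; values are examples only
args:
  - --provider=aws                              # manage records in Route53
  - --source=service                            # watch Services...
  - --source=ingress                            # ...and Ingresses
  - --domain-filter=testing.companydomain.com   # restrict to this hosted zone
  - --policy=upsert-only                        # never delete records it didn't create
```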

dns-controller can also create Route53 records, and it does so for kube-apiserver and other system components. However, to have it create records for your own nodes or Services, you need to add specific annotations, as in the sketch below. See the dns-controller documentation.
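For example, a minimal sketch of a Service annotated for dns-controller (the Service name, selector, ports, and hostname here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-name-svc
  annotations:
    # dns-controller watches this annotation and creates a Route53 record
    # for the hostname, pointing at the Service's load balancer
    dns.alpha.kubernetes.io/external: service-name-svc.testing.companydomain.com
spec:
  type: LoadBalancer
  selector:
    app: service-name
  ports:
    - port: 80
      targetPort: 8080
```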
