
coredns not deploying in new EKS cluster?

I'm deploying an AWS EKS cluster on Fargate (no EC2 nodes) into an existing VPC with both public and private subnets, and I am able to create the cluster successfully with eksctl. However, I see that the coredns Deployment is stuck at 0/2 Pods ready in the EKS console. I read that I need to allow port 53 in my security group rules, and I have. Here's my config file.

$ eksctl create cluster -f eks-sandbox-cluster.yaml
eks-sandbox-cluster.yaml
------------------------
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5

metadata:
  name: sandbox
  region: us-east-1
  version: "1.18"

# The VPC and subnets are for the data plane, where the pods will
# ultimately be deployed.
vpc:
  id: "vpc-12345678"
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  subnets:
  # us-east-1a is full
    private:
      us-east-1b:
        id: "subnet-xxxxxxxx"
      us-east-1c:
        id: "subnet-yyyyyyy"
    public:
      us-east-1b:
        id: "subnet-aaaaaaaa"
      us-east-1c:
        id: "subnet-bbbbbbbb"

fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default
  - name: fp-kube
    selectors:
      - namespace: kube-system
  - name: fp-myapps
    selectors:
      - namespace: myapp
        labels:
          app: myapp

cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"]
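
As a sanity check, the Fargate profiles can be listed after creation to confirm they exist (this assumes the cluster name sandbox and the region from the metadata above):

$ eksctl get fargateprofile --cluster sandbox --region us-east-1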

Why is the coredns Deployment not coming up?

I do see this in the kube-scheduler CloudWatch logs:

I0216 16:46:43.841076       1 factory.go:459] Unable to schedule kube-system/coredns-c79dcb98c-9pfrz: no nodes are registered to the cluster; waiting

I think this is also why I can't talk to my cluster via kubectl:

$ kubectl get pods
Unable to connect to the server: dial tcp 10.23.x.x:443: i/o timeout
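
That timeout is expected when running kubectl from outside the VPC, since the config above sets publicAccess: false on the cluster endpoint. A minimal sketch of one workaround, assuming eksctl and the sandbox cluster name, is to enable the public endpoint (alternatively, run kubectl from a host inside the VPC):

$ eksctl utils update-cluster-endpoints --cluster sandbox --region us-east-1 \
    --private-access=true --public-access=true --approve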

When I deployed the EKS cluster using a config file with our existing VPC and private-only endpoints, the coredns Deployment was set to start on EC2 nodes. Of course, with Fargate there are no EC2 nodes. I had to edit the coredns Deployment to use Fargate and restart the Deployment; a sketch of the commands is below.
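
This sketch assumes the stock CoreDNS Deployment that EKS installs: its pod template carries an eks.amazonaws.com/compute-type: ec2 annotation, and removing it lets the fargate-scheduler pick the pods up (the ~1 is JSON Pointer escaping for the / in the annotation key):

$ kubectl patch deployment coredns -n kube-system --type json \
    -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

# Recreate the pods so they get scheduled onto Fargate; they must match a
# Fargate profile selector, which the fp-kube profile's kube-system
# namespace selector above covers.
$ kubectl rollout restart -n kube-system deployment coredns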
