When deploying an EKS cluster, the best practice is to place the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint through which the cluster can be accessed from within the AWS network.
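As background on how that managed endpoint behaves: when private endpoint access is enabled, the cluster's API server hostname resolves (from inside the VPC) to private IPs of EKS-managed network interfaces. The sketch below only demonstrates extracting the hostname from an endpoint URL; the URL is a made-up example of the real format, and the actual lookup is left as a manual step since it requires a host inside the VPC.

```shell
# Hypothetical endpoint URL in the shape EKS uses; replace with the
# value from `aws eks describe-cluster --query cluster.endpoint`.
ENDPOINT="https://ABC123EXAMPLE.gr7.us-east-1.eks.amazonaws.com"

# Strip the scheme so the hostname can be handed to a DNS resolver.
HOST="${ENDPOINT#https://}"
echo "$HOST"

# From a host inside the VPC (e.g. a bastion), this would resolve to
# private RFC 1918 addresses; run it manually there, not here:
# nslookup "$HOST"
```

Outside the VPC the same hostname either resolves publicly (public endpoint enabled) or not usefully at all (private-only), which is why kubectl works locally in one setup and not the other.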
What is the architecture behind it, and how does it work internally? Is there some kind of proxy (agent) being deployed? (I found aws-node.)
The type of EKS networking you're setting up restricts access to the API server to a private endpoint that's only accessible from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (from public or private subnets). If you are doing this as a personal project, you can run the following from a host inside the VPC:
aws eks --region <region> update-kubeconfig --name <name-of-your-cluster>
to update your kubeconfig and then proceed to run kubectl commands.

Sidenote: If this is for an enterprise project, you can also look into using AWS VPN or Direct Connect to access the VPC.
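To check or change which endpoints a cluster exposes, the AWS CLI's describe-cluster and update-cluster-config commands can be used. This is a sketch with placeholder cluster name and region; the live calls require AWS credentials, so only the argument construction is exercised here and the actual invocations are shown commented.

```shell
# Placeholders; substitute your own values.
CLUSTER_NAME="my-cluster"
REGION="us-east-1"

# Build the --resources-vpc-config value: $1 = public access, $2 = private access.
vpc_config_args() {
  printf 'endpointPublicAccess=%s,endpointPrivateAccess=%s' "$1" "$2"
}

# Argument string for a private-only API endpoint:
echo "$(vpc_config_args false true)"

# The real calls (require credentials; shown for illustration):
# aws eks describe-cluster --region "$REGION" --name "$CLUSTER_NAME" \
#   --query "cluster.resourcesVpcConfig.endpointPrivateAccess"
# aws eks update-cluster-config --region "$REGION" --name "$CLUSTER_NAME" \
#   --resources-vpc-config "$(vpc_config_args false true)"
```

Note that update-cluster-config triggers an asynchronous cluster update; switching off public access will cut off any kubectl session running outside the VPC.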