
Assign external IP to Kubernetes nodes (AWS EKS)

I have a UDP service I need to expose to the internet from an AWS EKS cluster. AWS load balancers (Classic or NLB) don't support UDP, so I'd like to use a NodePort Service with Route 53 multivalue answer routing to get round-robin UDP load balancing across my nodes.
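
For reference, the Route 53 side would be one multivalue answer A record per node, roughly like this (the hosted zone ID, record name, and IP are placeholders):

aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "udp.example.com",
        "Type": "A",
        "SetIdentifier": "node-1",
        "MultiValueAnswer": true,
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'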

My nodes on AWS EKS don't have an ExternalIP assigned to them. While the EC2 instances the nodes run on have public IPs, those IPs weren't assigned to the node objects when the cluster was created.

How can I assign the EC2 public IPs to my k8s nodes?

NAME                           STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                                         KERNEL-VERSION               CONTAINER-RUNTIME
x.us-west-2.compute.internal   Ready     <none>    7d        v1.10.3   <none>        Amazon Linux 2 (2017.12) LTS Release Candidate   4.14.42-61.37.amzn2.x86_64   docker://17.6.2
x.us-west-2.compute.internal   Ready     <none>    7d        v1.10.3   <none>        Amazon Linux 2 (2017.12) LTS Release Candidate   4.14.42-61.37.amzn2.x86_64   docker://17.6.2
x.us-west-2.compute.internal   Ready     <none>    7d        v1.10.3   <none>        Amazon Linux 2 (2017.12) LTS Release Candidate   4.14.42-61.37.amzn2.x86_64   docker://17.6.2

I'm currently testing against an HTTP service for convenience; here's what my test service looks like:

apiVersion: v1
kind: Service
metadata:
  name: backend-api
  labels:
    app: backend-api
spec:
  selector:
    app: backend-api
  type: NodePort
  ports:
  - name: back-http
    port: 81
    targetPort: 8000
    protocol: TCP
  externalIPs:
  - x.x.x.x
  - x.x.x.x
  - x.x.x.x

For this example, my curl requests never hit the HTTP service running on the nodes. My hunch is that this is because the nodes don't have an ExternalIP.
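
For reference, the failing test looks roughly like this (the IP is a placeholder; port 81 is the Service port above, and the auto-assigned nodePort can be read from the PORT(S) column of kubectl get svc backend-api):

curl http://x.x.x.x:81/            # via the externalIPs entry
curl http://x.x.x.x:<node-port>/   # via the NodePort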

I haven't tried HostPort or UDP, but I've had success with public NodePorts.

As long as the instance has a public IP, its security group opens the ports, there's no OS-level firewall in the way, and you don't have conflicting NetworkPolicies, a HostPort or NodePort will just work: both forward a port on the node's OS into Kubernetes, so ExternalIP and other internal Kubernetes settings are irrelevant.
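
For the UDP case that suggests a plain NodePort Service, something like the sketch below (names and ports are illustrative; the nodePort is pinned so your Route 53 records stay valid, and the same port must be opened for UDP in the nodes' security group):

apiVersion: v1
kind: Service
metadata:
  name: backend-udp          # hypothetical name
spec:
  selector:
    app: backend-api
  type: NodePort
  ports:
  - name: back-udp
    port: 5000               # illustrative Service port
    targetPort: 5000
    nodePort: 30500          # must fall in the default 30000-32767 range
    protocol: UDP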

You can use the External IP controller to assign IPs to the nodes. It is designed for bare-metal clusters, but I think it should work in your case as well.

External IP Controller is a k8s application which is deployed on top of a k8s cluster and which configures External IPs on k8s worker node(s) to provide IP connectivity.

Description:

The External IP controller runs as a Kubernetes application on one of the nodes (replicas=1).

  • On start it pulls information about services from kube-api and brings up all External IPs on the specified interface (eth0 in the project's example).
  • It watches kube-api for updates in services with External IPs and:
    • When new External IPs appear it brings them up.
    • When service is removed it removes appropriate External IPs from the interface.
  • Kubernetes provides fail-over for the External IP controller. Since replicas is set to 1, only one instance runs in the cluster at a time, which avoids duplicated IPs. If the node it runs on fails, the External IP controller is spawned on a new worker node and brings the External IPs up there (see the sketch after this list).
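
Conceptually, what the controller does on the elected node amounts to the following (the interface and IP are placeholders), which you can also run by hand to test connectivity:

# bring a Service's External IP up on the node's interface
ip addr add 203.0.113.10/32 dev eth0
# remove it again when the Service goes away
ip addr del 203.0.113.10/32 dev eth0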

Check out the Demo to see how it works.
