
Kubernetes exposed service on EC2 not accessible

I have a Kubernetes master and minions running on EC2 instances and was able to successfully deploy an example app with the commands below:

kubectl run hello-world --image=gcr.io/google_containers/echoserver:1.4 --port=8080

kubectl expose deployment hello-world --type=NodePort

The service is now exposed externally on port 30013:

NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-world    10.43.95.16     <nodes>       8080:30013/TCP   1h
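
For reference, the private IP to pair with the NodePort can be listed directly from kubectl (a quick check, assuming kubectl is pointed at this cluster):

kubectl get nodes -o wide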

I'm now trying to access it by visiting the private IP of the EC2 instance hosting this Kubernetes minion node on port 30013, but I cannot connect at all.

I've checked the AWS security group: the port is open, and the group is attached to the EC2 instance. I cannot think of anything else that would block access to the application.
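
One way to narrow this down is to test from the minion node itself, which separates AWS networking from kube-proxy. A hedged sketch, assuming kube-proxy runs in iptables mode (the KUBE-NODEPORTS chain below only exists in that mode):

# on the minion node itself: does the NodePort answer locally?
curl http://localhost:30013

# is anything listening on / being translated for 30013?
sudo ss -tlnp | grep 30013
sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 30013

If localhost works but the private IP does not, the problem is outside Kubernetes (security groups, NACLs, source/destination checks); if neither works, kube-proxy on the node is the place to look.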

Are there any known issues with AWS networking and Kubernetes exposed services?

It should work (and it works on my cluster on AWS). Are you sure you are using the IP address of the eth0 interface and not cbr0 or something else? EC2 instances have just one network interface, and the public address is NAT-mapped onto it, so from inside EC2 there is not much difference.
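
A quick way to confirm which address you are actually hitting, assuming the standard interface names eth0 (the EC2 interface) and cbr0 (the Kubernetes container bridge):

ip -4 addr show eth0
ip -4 addr show cbr0

The NodePort should be reached via the eth0 address.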

Also, you should be able to reach 10.43.95.16 on port 8080, or just use the service's DNS name. If you want to connect to this service from another app inside the cluster, you should use that route instead (a node crash will not affect the communication, etc.).
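
For example, from a throwaway pod inside the cluster (a sketch, assuming the service lives in the default namespace and cluster DNS is running):

kubectl run tmp --rm -it --image=busybox -- sh
# inside the pod:
wget -qO- http://10.43.95.16:8080
wget -qO- http://hello-world.default.svc.cluster.local:8080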
