
Prevent public IP address binding on Kubernetes single master/node set-up

I'm following the instructions here to spin up a single-node master Kubernetes install. I then plan to make a website hosted within it available via an nginx ingress controller exposed directly on the internet (on a physical server, not GCE, AWS, or another cloud).

The set-up works as expected: I can hit the load balancer, flow through the ingress to the target echoheaders instance, and get my output. Everything looks great. Good stuff.

The trouble comes when I portscan the server's public internet IP and see all these open ports besides the ingress port (80).

 Open TCP Port:     80          http
 Open TCP Port:     4194
 Open TCP Port:     6443        
 Open TCP Port:     8081        
 Open TCP Port:     10250
 Open TCP Port:     10251
 Open TCP Port:     10252       
 Open TCP Port:     10255
 Open TCP Port:     38654
 Open TCP Port:     38700
 Open TCP Port:     39055
 Open TCP Port:     39056
 Open TCP Port:     44667
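
(For reference, a full TCP scan of the public address is enough to surface these listeners; nmap is assumed here, and <public-ip> is a placeholder:)

    # full TCP connect scan of every port on the public address
    nmap -p- -sT <public-ip>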

All of the extra ports correspond to cadvisor, skydns and the various echoheaders and nginx instances, which for security reasons should not be bound to the public IP address of the server. All of these are being injected into the host's KUBE-PORTALS-HOST iptables chain, with bindings to the server's public IP, by kube-proxy.
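
(The rules themselves are visible in the nat table; something along these lines should list the KUBE-PORTALS-HOST entries, assuming the userspace proxy mode in use here:)

    # list the portal rules kube-proxy has written into the nat table
    iptables -t nat -L KUBE-PORTALS-HOST -n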

How can I get hyperkube to tell kube-proxy to bind only to the docker IP (172.x) or private cluster IP (10.x) addresses?

You should be able to set the bind address on kube-proxy ( http://kubernetes.io/docs/admin/kube-proxy/ ):

--bind-address=0.0.0.0: The IP address for the proxy server to serve on (set to 0.0.0.0 for all interfaces)
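
For example, in a hyperkube-based set-up, passing the proxy something along these lines should keep it off the public interface (the 10.x address and the master URL below are placeholders; substitute your node's private/cluster-facing IP and your apiserver address):

    # run kube-proxy bound to the private address only, not 0.0.0.0
    hyperkube proxy \
      --master=http://127.0.0.1:8080 \
      --bind-address=10.0.0.1

Note this only affects where kube-proxy itself listens; the address is whatever private interface you want the proxied service ports reachable on.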
