
GCP Kubernetes: Ingress with external load balancer and IAP shows many open ports in an nmap scan

I have a k8s cluster running a Service behind an Ingress with an external HTTPS load balancer, and Identity-Aware Proxy (IAP) protecting my system. The Ingress has a public IP, and when I scan it with nmap I see the following open ports (a sketch of my setup follows the scan output):

PORT      STATE SERVICE
43/tcp    open  whois
53/tcp    open  domain
80/tcp    open  http
83/tcp    open  mit-ml-dev
84/tcp    open  ctf
85/tcp    open  mit-ml-dev
89/tcp    open  su-mit-tg
110/tcp   open  pop3
143/tcp   open  imap
443/tcp   open  https
465/tcp   open  smtps
587/tcp   open  submission
700/tcp   open  epp
993/tcp   open  imaps
995/tcp   open  pop3s
1084/tcp  open  ansoft-lm-2
1085/tcp  open  webobjects
1089/tcp  open  ff-annunc
1443/tcp  open  ies-lm
1935/tcp  open  rtmp
3389/tcp  open  ms-wbt-server
5222/tcp  open  xmpp-client
5432/tcp  open  postgresql
5900/tcp  open  vnc
5901/tcp  open  vnc-1
5999/tcp  open  ncd-conf
8080/tcp  open  http-proxy
8081/tcp  open  blackice-icecap
8085/tcp  open  unknown
8086/tcp  open  d-s-n
8088/tcp  open  radan-http
8089/tcp  open  unknown
8090/tcp  open  opsmessaging
8099/tcp  open  unknown
9100/tcp  open  jetdirect
9200/tcp  open  wap-wsp
20000/tcp open  dnp
30000/tcp open  ndmps
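For context, my setup looks roughly like the following. This is a minimal sketch rather than my exact manifests, and all the names (my-ingress, my-service, my-app, iap-config) are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"  # GKE external HTTPS load balancer
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Attaches a BackendConfig that turns on IAP for this backend
    cloud.google.com/backend-config: '{"default": "iap-config"}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080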

My question is: why are all these ports open? Are they opened by IAP, and if so, is that why I can scan what seems to be the Ingress IP without authentication? Ultimately, can I close all but the HTTP/S ports for security? If it is IAP, perhaps these ports need to be open to forward traffic for different services that MAY be available but are not in my cluster; does that explain this?

Any hints would be lovely. I've RTFMed, and everything about the Ingress seems to point to it only accepting HTTP/S traffic and forwarding it to the Service/Deployment. Is it IAP that is leaving these ports open, or is it truly the Ingress? The scanned address is the IP associated with the Ingress. Do I need to add a FrontendConfig to my cluster to configure the Ingress to have these ports closed? Something like the sketch below is what I have in mind.
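For concreteness, this is the kind of FrontendConfig I mean, going by the GKE docs; the resource and SSL policy names are made up:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true           # answer port-80 requests with a redirect to HTTPS
  sslPolicy: my-ssl-policy  # an existing Compute Engine SSL policy

It would be attached to the Ingress with the annotation networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config", but from the docs it seems to control TLS policy and HTTP-to-HTTPS redirects rather than which ports are open.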

Thanks in advance!

I got a response from the wonderful support team at Google Cloud Platform. Thank you, Google. They confirmed my assumption that these ports are open for a variety of potential services, but our configuration only forwards the traffic we have requested to our backend. Leaving this on Stack Overflow in case anyone else needs this info.

Clients communicate with a Google Front End (GFE) using your Kubernetes load balancer's external IP address, and the GFE communicates with your backend services using the internal IP address. The GFE forwards the traffic to the backend instances [1]. Each GFE serves content as a proxy and is not part of your configuration [2].

Each GFE serves traffic for many customers as part of its overall security design [3], and the external IP address of your Kubernetes load balancer is programmed onto a number of shared GFE servers worldwide. Because the GFE is not unique to you or your load balancer's configuration, it also accepts traffic on other TCP ports. However, incoming traffic to the GFE on other ports is NOT sent to your backends. This way, the GFE secures your instances by only acting on requests to ports you've configured, even though it listens on more.

For that reason, you see more ports open than expected.

You can read more about this behavior here [4].
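In practice, then, what matters is your own backend configuration, not the GFE's listening ports. For anyone who lands here, a minimal sketch of the BackendConfig that enables IAP on a GKE backend service (the resource and secret names are assumptions):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: iap-config
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      # Kubernetes Secret holding the OAuth client_id and client_secret
      secretName: iap-oauth-secret

Requests to ports you never configured (the extra "open" ports in the scan) terminate at the shared GFE and are never forwarded; requests on 443 still have to pass IAP before they reach the Service.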
