
GCP VM in same region not able to Ping Internal HTTPS Load Balancer IP created with GKE internal LB ingress

I have a GKE cluster deployed with version 1.20.10-gke.1600. I created an internal ingress with the GCE ingress class and an internal IP was assigned to it. However, I am not able to ping this internal ingress IP from a VM in the same region and network. Ping to the external ingress works fine. I read the document below, which says pinging an internal TCP/UDP load balancer is not possible because it is not deployed as a network device. However, I do not see anything regarding the internal HTTPS load balancer.

https://cloud.google.com/load-balancing/docs/internal/setting-up-internal#no-ping-lb

ping 10.128.0.174

Pinging 10.128.0.174 with 32 bytes of data:
Request timed out.

Ping statistics for 10.128.0.174:
    Packets: Sent = 1, Received = 0, Lost = 1 (100% loss),

The question is: why am I not able to ping my internal LB ingress IP? I am pinging from a VM in the same region and network. Curl to the internal ingress IP works, but ping does not.
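For illustration, a minimal sketch of the two tests (assuming the 10.128.0.174 address from the ping output above, and that the internal ingress serves HTTPS; the curl options are only examples):

# HTTPS request to the internal ingress IP: handled by the load balancer's data path
curl -kv https://10.128.0.174/

# ICMP to the same IP: times out, because the LB IP is not a device that answers ping
ping 10.128.0.174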

The load balancer's IP is just (as Gari Singh wrote) a virtual address, not a separate appliance, and it won't respond to ping. This is intended behavior.

The documentation you linked about pinging the LB's internal address clearly says:

This test demonstrates an expected behavior: You cannot ping the IP address of the load balancer. This is because internal TCP/UDP load balancers are implemented in virtual network programming — they are not separate devices.

and then explains why:

Internal TCP/UDP Load Balancing is implemented using virtual network programming and VM configuration in the guest OS. On Linux VMs, the Linux Guest Environment performs the local configuration by installing a route in the guest OS routing table. Because of this local route, traffic to the IP address of the load balancer stays on the load balanced VM itself. (This local route is different from the routes in the VPC network.)
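For context, on a Linux backend VM you can inspect the local route the guest environment installs; a minimal sketch, assuming a backend behind an internal TCP/UDP load balancer whose IP is 10.128.0.174 (the exact output can vary by image and guest agent version):

# List local-table routes on the backend VM and look for the LB address
ip route show table local | grep 10.128.0.174
# Typically shows something like: local 10.128.0.174 dev eth0 scope host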

So, if for example you're trying to set up some sort of custom health check, also take into account that "pinging" the LB's internal address from inside the cluster is unreliable:

Don't rely on making requests to an internal TCP/UDP load balancer from a VM being load balanced (in the backend service for that load balancer). A request is always sent to the VM that makes the request, and health check information is ignored. Further, the backend can respond to traffic sent using protocols and destination ports other than those configured on the load balancer's internal forwarding rule.

Even more:

This default behavior doesn't apply when the backend VM that sends the request has an --next-hop-ilb route with a next hop destination that is its own load balanced IP address. When the VM targets the IP address specified in the route, the request can be answered by another load balanced VM.

You can, for example, create a destination route of 192.168.1.0/24 with a --next-hop-ilb of 10.20.1.1.

A VM that is behind the load balancer can then target 192.168.1.1. Because the address isn't in the local routing table, it is sent out the VM for Google Cloud routes to be applicable. Assuming no other routes are applicable with higher priority, the --next-hop-ilb route is chosen.
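A minimal sketch of creating such a route with gcloud (the route name, network name, and addresses below are hypothetical, matching the example above):

# Route 192.168.1.0/24 through the internal LB's forwarding rule IP 10.20.1.1
gcloud compute routes create ilb-hairpin-route \
    --network=my-vpc \
    --destination-range=192.168.1.0/24 \
    --next-hop-ilb=10.20.1.1
# A backend VM then targets 192.168.1.1; because that address is not in its local
# routing table, the packet leaves the VM and the --next-hop-ilb route can apply,
# so another load balanced VM may answer the request.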
