
Kubernetes LoadBalancer stops serving traffic when using local traffic policy

I am currently having an issue with one of my services of type LoadBalancer. I am trying to get source IP preservation as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that is causing it to fail like this?

Load Balancer Service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: loadbalancer
    role: loadbalancer-service
  name: lb-test
  namespace: default
spec:
  clusterIP: 10.3.249.57
  externalTrafficPolicy: Local
  ports:
  - name: example-service
    nodePort: 30581
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: loadbalancer-example
    role: example
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: *example.ip*

Could be several things. A couple of suggestions:

  1. Your service is getting an external IP and doesn't know how to reply back based on the local IP address of the pod.
    • Try running a sniffer on your pod to see if you are getting packets from the external source (see the capture sketch after this list).
    • Try checking the logs of your application.
  2. The health check in your load balancer is failing. Check the load balancer for your service in the GCP console. With externalTrafficPolicy: Local, only nodes that run a pod backing the service pass the load balancer's health check, so nodes without a local pod are taken out of rotation (see the health-check commands after this list).
    • Check that the instance port is listening (probably not, if your health check is failing).
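
For the first suggestion, a minimal sketch of how you might capture traffic and check logs. The pod name is a placeholder, and the netshoot debug image is an assumption (kubectl debug also requires ephemeral-container support in your cluster):

# Capture inbound traffic on the target port from inside the pod
# (requires tcpdump in the container image):
kubectl exec -it <pod-name> -- tcpdump -ni any tcp port 8000

# Or attach a throwaway debug container that ships with tcpdump:
kubectl debug -it <pod-name> --image=nicolaka/netshoot -- tcpdump -ni any tcp port 8000

# Check the application's own logs, using the labels from your selector:
kubectl logs -l app=loadbalancer-example,role=example --tail=50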
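
For the second suggestion, a sketch of how to verify which nodes pass the Local-policy health check. The node IP is a placeholder; this assumes kube-proxy is serving /healthz on the healthCheckNodePort that Kubernetes allocates for services with externalTrafficPolicy: Local:

# Find the health-check node port allocated for the Local policy:
kubectl get svc lb-test -o jsonpath='{.spec.healthCheckNodePort}'

# See which nodes actually run a backing pod; only those should pass the probe:
kubectl get pods -l app=loadbalancer-example,role=example -o wide

# Probe the health endpoint on a node:
# 200 = node has a local endpoint, 503 = it does not.
curl -s -o /dev/null -w '%{http_code}\n' http://<node-ip>:<healthCheckNodePort>/healthz

If every node returns 503, the pods may not be running at all, or the service selector may not match their labels.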

Hope it helps.
