kubernetes: Service endpoints available from within cluster, not outside

I have a Service (type LoadBalancer) definition in a k8s cluster that exposes ports 80 and 443.

The k8s dashboard indicates that these are the external endpoints:

(for what it's worth, k8s has been deployed using Rancher)

<some_rancher_agent_public_ip>:80
<some_rancher_agent_public_ip>:443

Here comes the weird (?) part:

From a busybox pod spawned within the cluster:

wget <some_rancher_agent_public_ip>:80
wget <some_rancher_agent_public_ip>:443

both succeed (i.e., they fetch the index.html file).

From outside the cluster:

Connecting to <some_rancher_agent_public_ip>:80... connected.
HTTP request sent, awaiting response... 

2018-01-05 17:42:51 ERROR 502: Bad Gateway.

I am assuming this is not a security groups issue given that:

  • it does connect to <some_rancher_agent_public_ip>:80
  • I have also tested this by allowing all traffic from all sources in the security group that the instance with <some_rancher_agent_public_ip> belongs to

In addition, nmap-ing the above public IP shows ports 80 and 443 in the open state.
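
(Something along these lines; the exact flags shown here are illustrative, not the original invocation:)

$ nmap -p 80,443 <some_rancher_agent_public_ip>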

Any suggestions?

Update:

$ kubectl describe svc ui
Name:                     ui
Namespace:                default
Labels:                   <none>
Annotations:              service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:eu-west-1:somecertid
Selector:                 els-pod=ui
Type:                     LoadBalancer
IP:                       10.43.74.106
LoadBalancer Ingress:     <some_rancher_agent_public_ip>, <some_rancher_agent_public_ip>
Port:                     http  80/TCP
TargetPort:               %!d(string=ui-port)/TCP
NodePort:                 http  30854/TCP
Endpoints:                10.42.179.14:80
Port:                     https  443/TCP
TargetPort:               %!d(string=ui-port)/TCP
NodePort:                 https  31404/TCP
Endpoints:                10.42.179.14:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

And here is the respective pod description:

$ kubectl describe pod <the_pod_id>
Name:           <pod_id>
Namespace:      default
Node:           ran-agnt-02/<some_rancher_agent_public_ip>
Start Time:     Fri, 29 Dec 2017 16:48:42 +0200
Labels:         els-pod=ui
                pod-template-hash=375086521
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"ui-deployment-7c94db965","uid":"5cea65ea-eca7-11e7-b8e0-0203f78b...
Status:         Running
IP:             10.42.179.14
Created By:     ReplicaSet/ui-deployment-7c94db965
Controlled By:  ReplicaSet/ui-deployment-7c94db965
Containers:
  ui:
    Container ID:   docker://some-container-id
    Image:          docker-registry/imagename
    Image ID:       docker-pullable://docker-registry/imagename@sha256:some-sha
    Port:           80/TCP
    State:          Running
      Started:      Fri, 05 Jan 2018 16:24:56 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 05 Jan 2018 16:23:21 +0200
      Finished:     Fri, 05 Jan 2018 16:23:31 +0200
    Ready:          True
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8g7bv (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-8g7bv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8g7bv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Kubernetes provides different ways of exposing pods outside the cluster, mainly Services and Ingresses. I'll focus on Services, since that's where you are having issues.

There are several Service types, among them:

  • ClusterIP: the default type. Choosing this type means that your service gets a stable IP which is reachable only from inside the cluster. Not relevant here.
  • NodePort: besides having a cluster-internal IP, the service is also exposed on a port on each node of the cluster (the same port on every node, allocated from the node-port range, 30000-32767 by default). You'll be able to contact the service on any NodeIP:NodePort address. That's why you can contact rancher_agent_public_ip:NodePort from outside the cluster.
  • LoadBalancer: besides having a cluster-internal IP and being exposed on a NodePort, the service also gets a load balancer from the cloud provider that exposes it externally (see the sketch right after this list).
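
For concreteness, here is roughly what such a Service looks like. The manifest below is a sketch reconstructed from the describe output in your question (your original file may differ); note that targetPort refers to a named container port, which is why kubectl describe prints it as %!d(string=ui-port):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  type: LoadBalancer   # also gets a ClusterIP and NodePorts
  selector:
    els-pod: ui        # matches the pod label from your describe output
  ports:
  - name: http
    port: 80
    targetPort: ui-port
  - name: https
    port: 443
    targetPort: ui-port
EOF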

Creating a Service of type LoadBalancer makes it a NodePort service as well. That's why you can reach rancher_agent_public_ip:30854.
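
A quick sanity check from outside the cluster (assuming your security group allows the NodePort range; 30854 is the http NodePort from your describe output):

$ curl -v http://<some_rancher_agent_public_ip>:30854/

Note that, given the ACM certificate annotation on the service, TLS is presumably terminated at the load balancer, so the https NodePort (31404) most likely speaks plain HTTP to the pod as well.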

I have no experience with Rancher, but it seems that creating a LoadBalancer Service deploys an HAProxy to act as the load balancer. That HAProxy created by Rancher needs a public IP that is reachable from outside the cluster, and a port that redirects requests to the NodePort.

But in your service, the IP field (10.43.74.106) is the cluster-internal ClusterIP. That IP won't be reachable from outside the cluster; you need a public IP.
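
To see what external address, if any, was actually allocated for the load balancer, check the service's status (both commands below are standard kubectl; the jsonpath query just extracts the relevant field):

$ kubectl get svc ui
$ kubectl get svc ui -o jsonpath='{.status.loadBalancer.ingress}'

The EXTERNAL-IP column of the first command (and the ingress list printed by the second) shows what address the provider reported back, which should be publicly reachable.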
