
Kubernetes Pods accessible from outside cluster

I have two Kubernetes clusters. I have deployed an Nginx server pod on one cluster; its pod IP is 10.40.0.1. When I ping 10.40.0.1 from any node of this cluster, it responds.

However, when I ping that pod from a node in the second cluster, it is not reachable. How can I make the pod accessible from the second cluster's nodes as well?

I have deployed the Nginx server with the YAML file below.

apiVersion: v1
kind: Pod
metadata:
  name: serverpod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - Node1

I have tried hostNetwork: true but it did not work.

You have posted a pod spec with nodeAffinity in your question, which means your pod will always run on Node1:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - Node1
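Regarding the hostNetwork attempt: note that the field is spelled hostNetwork (camelCase) and belongs directly under spec, at the same level as containers. A minimal sketch, reusing the pod from the question:

apiVersion: v1
kind: Pod
metadata:
  name: serverpod
spec:
  hostNetwork: true        # pod shares the node's network namespace
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80    # nginx is then reachable on port 80 of the node itself

With hostNetwork the pod binds port 80 on Node1 directly, so the node's firewall must allow inbound traffic on that port and no other process on the node may already be listening on it.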

If you set hostNetwork: true in the pod spec, the pod shares Node1's network namespace, so you can access it with curl <IP of Node1, or just Node1 if the name resolves to an IP>. You can also expose the pod with kubectl expose pod serverpod --type NodePort --name serverpod --port 80 (the pod must carry a label for the generated Service selector to match). Kubernetes then assigns a node port in the 30000-32767 range, visible via kubectl get svc serverpod, and you can curl <any node IP>:<nodePort>; kube-proxy routes the request to your pod. Both methods work out of the box and do not require you to install a load balancer, ingress controller, or service mesh.
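The kubectl expose command generates a Service object for you; written out as a manifest it looks roughly like this (a sketch, assuming you add a label such as app: serverpod to the pod so the selector can match it; the nodePort value is illustrative and must fall within 30000-32767):

apiVersion: v1
kind: Service
metadata:
  name: serverpod
spec:
  type: NodePort
  selector:
    app: serverpod     # must match a label on the pod
  ports:
  - port: 80           # Service port inside the cluster
    targetPort: 80     # the pod's containerPort
    nodePort: 31000    # exposed on every node in the cluster

After applying this with kubectl apply -f, curl <any node IP>:31000 from the second cluster should reach the pod.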

The technical post webpages of this site follow the CC BY-SA 4.0 protocol. If you need to reprint, please indicate the site URL or the original address.Any question please contact:yoyou2525@163.com.
