
Kubernetes pod cluster IP not responding?

I have two backend services deployed on Google Cloud Kubernetes Engine:

a) Backend service

b) Admin portal, which needs to connect to the backend service

Everything runs in a single cluster.

Under Workloads / Pods, I have three deployments running, where fitme:9000 is the backend and nginx-1:9000 is the admin portal service.

Under Services, I have the five services listed in the explanation below.

Visualization

(diagram of the deployments and services, summarized below)

Explanation

1. D1 (fitme), D2 (mongo-mongodb), D3 (nginx-1) are three deployments

2. E1D1 (fitme-service), E2D1 (fitme-jr29g), E1D2 (mongo-mongodb), E2D2 (mongo-mongodb-rcwwc) and E1D3 (nginx-1-service) are Services

3. `E1D1`, `E1D2` and `E1D3` are exposed over `Load Balancer`, whereas `E2D1` and `E2D2` are exposed over `Cluster IP`.

The reason behind this:

D1 needs to access D2 (internally) -> this works perfectly fine. I am using the E2D2 service (Cluster IP) to access the D2 deployment from inside D1.

Now, D3 needs access to the D1 deployment. So I exposed D1 as the E2D1 service and am trying to access it internally via the generated Cluster IP of E2D1, but the request times out.
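
A minimal way to reproduce the timeout from inside the admin pod might look like this (assuming curl is available in the image; the pod name is a placeholder):

# find the nginx-1 (admin) pod and open a shell in it
kubectl get pods -l app=admin
kubectl exec -it <nginx-1-pod-name> -- sh

# from inside the pod, call the fitme-jr29g Cluster IP on port 9000
curl -v http://10.35.240.95:9000/
# -> the request hangs and eventually times out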

YAML for the fitme-jr29g service

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T11:18:55Z"
  generateName: fitme-
  labels:
    app: fitme
  name: fitme-jr29g
  namespace: default
  resourceVersion: "486673"
  selfLink: /api/v1/namespaces/default/services/fitme-8t7rl
  uid: 875045eb-14f5-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.240.95
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: fitme
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

YAML for the nginx-1-service service

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T11:30:10Z"
  labels:
    app: admin
  name: nginx-1-service
  namespace: default
  resourceVersion: "489972"
  selfLink: /api/v1/namespaces/default/services/admin-service
  uid: 195b462e-14f7-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.250.90
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30628
    port: 8080
    protocol: TCP
    targetPort: 9000
  selector:
    app: admin
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.227.26.101

YAML for the nginx-1 deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-12-02T11:24:09Z"
  generation: 2
  labels:
    app: admin
  name: admin
  namespace: default
  resourceVersion: "489624"
  selfLink: /apis/apps/v1/namespaces/default/deployments/admin
  uid: 426792e6-14f6-11ea-823c-42010a8e0047
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: admin
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: admin
    spec:
      containers:
      - image: gcr.io/docker-226818/admin@sha256:602fe6b7e43d53251eebe2f29968bebbd756336c809cb1cd43787027537a5c8b
        imagePullPolicy: IfNotPresent
        name: admin-sha256
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-12-02T11:24:18Z"
    lastUpdateTime: "2019-12-02T11:24:18Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-02T11:24:09Z"
    lastUpdateTime: "2019-12-02T11:24:18Z"
    message: ReplicaSet "admin-8d55dfbb6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

YAML for the fitme-service service

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T13:38:21Z"
  generateName: fitme-
  labels:
    app: fitme
  name: fitme-service
  namespace: default
  resourceVersion: "525173"
  selfLink: /api/v1/namespaces/default/services/drogo-mzcgr
  uid: 01e8fc39-1509-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.240.74
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31016
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: fitme
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.236.110.230

YAML for the fitme deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-12-02T13:34:54Z"
  generation: 2
  labels:
    app: fitme
  name: fitme
  namespace: default
  resourceVersion: "525571"
  selfLink: /apis/apps/v1/namespaces/default/deployments/drogo
  uid: 865a5a8a-1508-11ea-823c-42010a8e0047
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: drogo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: fitme
    spec:
      containers:
      - image: gcr.io/fitme-226818/drogo@sha256:ab49a4b12e7a14f9428a5720bbfd1808eb9667855cb874e973c386a4e9b59d40
        imagePullPolicy: IfNotPresent
        name: fitme-sha256
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-12-02T13:34:57Z"
    lastUpdateTime: "2019-12-02T13:34:57Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-02T13:34:54Z"
    lastUpdateTime: "2019-12-02T13:34:57Z"
    message: ReplicaSet "drogo-5c7f449668" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

I am accessing fitme-jr29g by using the IP address 10.35.240.95:9000 inside the nginx-1 deployment container.

The deployment object can, and often should, have network properties to expose the applications within its pods.

Pods are network-capable objects, with virtual Ethernet interfaces, needed to receive incoming traffic.

Services, on the other hand, are purely network-oriented objects, meant mostly to relay network traffic into the pods.

You can think of pods (grouped in deployments) as the backend and services as load balancers. In the end, both need network capabilities.
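
As an illustration of that relationship, here is a minimal sketch of a deployment and a ClusterIP service wired together (the names, image and ports are generic placeholders, not taken from your cluster):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-backend
  template:
    metadata:
      labels:
        app: example-backend
    spec:
      containers:
      - name: app
        image: example/backend:latest   # placeholder image
        ports:
        - containerPort: 9000           # the port the process listens on
---
apiVersion: v1
kind: Service
metadata:
  name: example-backend
spec:
  type: ClusterIP
  selector:
    app: example-backend                # matches the pod labels above
  ports:
  - port: 9000                          # port exposed on the Cluster IP
    targetPort: 9000                    # forwarded to the pod's containerPort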

In your scenario, I'm not sure how you are exposing your deployment via a load balancer, since its pods don't seem to have any open ports.

Since the services exposing your pods target port 9000, you can add it to the pod template in your deployment:

spec:
  containers:
  - image: gcr.io/fitme-xxxxxxx
    name: fitme-sha256
    ports:
    - containerPort: 9000

Be sure that it matches the port on which your container is actually receiving the incoming requests.
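
One way to verify the wiring after that change, sketched with standard kubectl commands (the pod name is a placeholder), is:

# check that the service now lists pod IPs as endpoints
kubectl get endpoints fitme-jr29g

# test the Cluster IP, and the service DNS name, from inside the admin pod
kubectl exec -it <nginx-1-pod-name> -- curl -v http://10.35.240.95:9000/
kubectl exec -it <nginx-1-pod-name> -- curl -v http://fitme-jr29g:9000/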
