
Not able to access frontend pod in a kubernetes cluster

I started a few microservices in a k8s cluster: a Eureka service-discovery server (exposed via a Service object) and a few client microservices that are registered with the Eureka server. Up to this point everything was fine: the Eureka server is up and running and I am able to access it using the node IP. Then I was asked to run one more service, which was delivered by my frontend team as a Dockerfile. Below is the content of the Dockerfile:

### STAGE 1: Build ###
FROM node:12.7-alpine AS build
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
 
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
COPY --from=build /usr/src/app/dist/login /usr/share/nginx/html
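
For completeness, the second stage just copies the built app into nginx's web root, and the nginx:1.17.1-alpine base image listens on port 80 inside the container. Building the image locally would look roughly like this (the tag is only an example):

# Build the frontend image from the Dockerfile above
docker build -t userinterface .
# nginx inside this image serves the app on container port 80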

I created a Helm chart using helm create UI and changed the image accordingly. After that I installed the application using the helm install command and also created a Service object of type NodePort. But the problem is that I am not able to access the service or the application.
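
For reference, the steps I followed were roughly the ones below; the values.yaml edits are paraphrased, and the release name matches what appears in the output later:

helm create UI
# in UI/values.yaml: point image.repository / image.tag at the frontend image
# and set service.type to NodePort
helm install ui-comp ./UI
kubectl get pods,svc   # check that the release came up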

However, when I start the application using the docker run command that the frontend team used to test it locally,

docker run --name userinterface -d -p 8585:80 userinterface

I am able to access the application from the k8s cluster node using the URL http://localhost:8585.

Running it with docker directly is obviously not what we want in production, which is why I created the Helm chart and the Service object in the first place.

kubectl get pods
ui-comp-5458cd5654-wpbvw             1/1     Running   0          35m

kubectl get svc
userinterface      NodePort   10.98.75.125   <none>   80:30003/TCP   35m
discovery-server   NodePort   10.97.34.27    <none>   80:30005/TCP   7d18h

But when I try to access it using the node IP, I get no output:

curl http://<node_ip>:30003

I created and installed this frontend service in exactly the same way the Eureka server is running.

I am not sure what else I am missing here. Any suggestions, please?

Adding the deployment.yaml and service.yaml for reference:

service.yaml

controller-1:~$ kubectl get svc ui-comp -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: ui-comp
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2021-01-29T07:56:19Z"
  labels:
    app: ui-comp
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/managed-by: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:nodePort: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/name: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: Go-http-client
    operation: Update
    time: "2021-01-29T07:56:19Z"
  name: ui-comp
  namespace: default
  resourceVersion: "31111377"
  selfLink: /api/v1/namespaces/default/services/ui-comp
  uid: 20428f83-92f3-4c84-b573-cb124d2efb39
spec:
  clusterIP: 10.98.75.125
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30003
    port: 80
    protocol: TCP
    targetPort: 8585
  selector:
    app.kubernetes.io/instance: ui-comp
    app.kubernetes.io/name: ui-comp
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
controller-1:~$

deployment.yaml

controller-1:~$ kubectl get deploy ui-comp -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: ui-comp
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2021-01-29T07:56:19Z"
  generation: 1
  labels:
    app: ui-comp
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/managed-by: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:app: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"ui-comp"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":8585,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:name: {}
                    f:protocol: {}
                f:resources: {}
                f:securityContext: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:imagePullSecrets:
              .: {}
              k:{"name":"regcred"}:
                .: {}
                f:name: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:serviceAccount: {}
            f:serviceAccountName: {}
            f:terminationGracePeriodSeconds: {}
    manager: Go-http-client
    operation: Update
    time: "2021-01-29T07:56:19Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-01-29T07:56:22Z"
  name: ui-comp
  namespace: default
  resourceVersion: "31111428"
  selfLink: /apis/apps/v1/namespaces/default/deployments/ui-comp
  uid: 2683b379-e3f3-4908-8db7-4a0617f40c0b
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ui-comp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ui-comp
    spec:
      containers:
      - image: neerajkp/sotool:ui
        imagePullPolicy: Always
        name: ui-comp
        ports:
        - containerPort: 8585
          name: http
          protocol: TCP
        resources: {}
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ui-comp
      serviceAccountName: ui-comp
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-01-29T07:56:22Z"
    lastUpdateTime: "2021-01-29T07:56:22Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-01-29T07:56:19Z"
    lastUpdateTime: "2021-01-29T07:56:22Z"
    message: ReplicaSet "ui-comp-5458cd5654" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
controller-1:~$

After the correction:

controller-1:~/neeraj/k8s/charts$ kubectl describe svc ui-comp
Name:                     ui-comp
Namespace:                default
Labels:                   app=ui-comp
                          app.kubernetes.io/managed-by=Helm
Annotations:              meta.helm.sh/release-name: ui-comp
                          meta.helm.sh/release-namespace: default
Selector:                 app=ui-comp
Type:                     NodePort
IP:                       10.98.253.188
Port:                     <unset>  80/TCP
TargetPort:               8585/TCP
NodePort:                 <unset>  30003/TCP
Endpoints:                10.244.166.184:8585
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
controller-1:~/neeraj/k8s/charts$ curl http://10.207.5.1:30003
curl: (7) Failed connect to 10.207.5.1:30003; Connection refused
controller-1:~/neeraj/k8s/charts$
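
At this point the Service does have an endpoint, yet curl against the NodePort is still refused. One way to narrow down whether the problem is in the Service or in the container itself is to bypass the Service and port-forward straight to the Deployment (the ports below are taken from the containerPort in the Deployment spec above):

# Forward local port 8585 to the pod's containerPort 8585, bypassing the NodePort Service
kubectl port-forward deploy/ui-comp 8585:8585
# in another terminal:
curl http://localhost:8585
# If this is refused as well, the container is not actually listening on 8585;
# the nginx base image in the Dockerfile serves on port 80 by default, so the
# containerPort / targetPort would have to match that.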

Your Service is not selecting your backend pods. You can confirm this by running kubectl describe svc ui-comp and checking the Endpoints field: you will see that there are no endpoints.
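
For example, either of the following shows whether the Service has picked up any pods:

kubectl describe svc ui-comp | grep -i endpoints
kubectl get endpoints ui-comp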

Your pods have the label app: ui-comp, but your Service is trying to select pods with the labels app.kubernetes.io/instance: ui-comp and app.kubernetes.io/name: ui-comp.

Service selector labels:

selector:
    app.kubernetes.io/instance: ui-comp
    app.kubernetes.io/name: ui-comp

Pod labels:

template:
    metadata:
      creationTimestamp: null
      labels:
        app: ui-comp

You need to correct either one of them so that they match.
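
For example, keeping the app: ui-comp label on the pods, the Service selector would become the following (a sketch; the ports block is unchanged from your spec):

# service spec (excerpt) - selector now matches the Deployment's pod template label
spec:
  type: NodePort
  selector:
    app: ui-comp
  ports:
  - port: 80
    targetPort: 8585
    nodePort: 30003
    protocol: TCP

If the chart was scaffolded with helm create, these labels normally come from the selectorLabels helper in templates/_helpers.tpl, so the Deployment's pod template labels and the Service selector should both be rendered from the same helper to keep them in sync.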
