Kubernetes not creating docker container

I am really having trouble debugging this and could use some help. I am successfully starting a Kubernetes service and deployment using a working Docker image.

My service file:

apiVersion: v1
kind: Service
metadata:
  name: auth-svc
  labels:
    app: auth_v1
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30000
    protocol: TCP
  selector:
    app: auth_v1
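
As a quick sanity check on the Service side, you can confirm the NodePort mapping and that the selector actually matches pods (a minimal sketch, assuming the names above; <node-ip> is any worker node's address):

$ kubectl get svc auth-svc          # shows the 3000:30000/TCP mapping
$ kubectl get endpoints auth-svc    # an empty ENDPOINTS column means no pod carries app=auth_v1
$ curl http://<node-ip>:30000       # hit the NodePort from outside the cluster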

Deploy File:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-deploy
  labels:
    app: auth_v1
spec:
  revisionHistoryLimit: 5
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  replicas: 3
  selector:
    matchLabels:
      app: auth_v1
  template:
    metadata:
      labels:
        app: auth_v1
    spec:
      containers:
      - name: auth-pod
        image: index.docker.io/XXX/auth
        command: [ "yarn", "start-staging" ]
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: myregistrykey
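
A quick way to catch manifest mistakes (for example, the imagePullSecrets indentation, which must sit at the pod spec level, not inside the container entry) before anything hits the cluster. This is a sketch: auth-deploy.yaml is a placeholder filename, and on older kubectl versions the flag is plain --dry-run rather than --dry-run=client:

$ kubectl apply --dry-run=client -f auth-deploy.yaml   # parse and validate without creating anything
$ kubectl get deploy auth-deploy -o yaml               # compare what the API server stored against the file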

kubectl get pods shows that the pods are up and running. I have tested jumping into the pod/container with a shell and tried running my application, and it works. When I run kubectl describe deploy auth-deploy I see a container listed as auth-pod. However, I am not seeing any containers when I run docker ps or docker ps -a. Also, the logs for my pods show nothing. Is there something I am doing wrong?
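
For reference, the usual way to pull the logs with these names would be (assuming the labels and names from the manifests above):

$ kubectl get pods -l app=auth_v1      # list only the pods from this deployment
$ kubectl logs <pod-name>              # logs of the single container in the pod
$ kubectl logs <pod-name> --previous   # logs of a prior instance, if the container restarted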

For reference, here is my Dockerfile:

FROM node:8.11.2-alpine AS build

LABEL maintainer="info@XXX.com"

# Copy Root Dir & Set Working Dir
COPY . /src
WORKDIR /src

# Build & Start Our App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
RUN yarn build-staging

# Build Production Image Using Node Container
FROM node:8.11.2-alpine AS production

# Copy Build to Image
COPY --from=build /src/.next /src/.next/
COPY --from=build /src/production-server /src/production-server/
COPY --from=build /src/static /src/static/
COPY --from=build /src/package.json /src
WORKDIR /src

# Install Essential Packages & Start App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install

# Expose Ports Needed
EXPOSE 3000

VOLUME [ "/src/log" ]

# Start App
CMD [ "yarn", "start-staging" ]

Is it possible that you are running docker ps on the K8s master instead of on the node where the pods are located?

You can find out where your pods are running by running the command below:

$ kubectl describe pod auth-deploy

It should return something similar to the output below (in my case, it is a Percona workload):

$ kubectl describe pod percona
Name:           percona-b98f87dbd-svq64
Namespace:      default
Node:           ip-xxx-xx-x-xxx.us-west-2.compute.internal/xxx.xx.x.xxx

Get the IP, SSH into the node, and run docker ps locally on the node where your container is located.

$ docker ps | grep percona
010f3d529c55        percona                      "docker-entrypoint.s…"   7 minutes ago       Up 7 minutes                            k8s_percona_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
616d70e010bc        k8s.gcr.io/pause-amd64:3.1   "/pause"                 8 minutes ago       Up 7 minutes                            k8s_POD_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0

Another possibility is that you are using a different container runtime, such as rkt, containerd, or lxd, instead of Docker.
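
On reasonably recent kubectl versions you can check which runtime each node reports, since the wide node listing includes a CONTAINER-RUNTIME column:

kubectl get nodes -o wide    # CONTAINER-RUNTIME shows e.g. docker://18.6 or containerd://1.2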

Kubernetes pods are made up of grouped containers running together on a dedicated node.

Kubernetes manages where pods are created and handles their lifecycle. A Kubernetes cluster consists of worker nodes and a master server. The master server is able to connect to nodes, create containers, and bond them into pods. The master node is designed to run only the management components, such as the API server that kubectl talks to, the cluster state database etcd, and the other daemons required to keep the cluster up and running.
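
You can see this split directly; the ROLES column distinguishes the master from the workers (illustrative output with hypothetical node names and versions):

kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   30d   v1.14.0
worker-0   Ready    <none>   30d   v1.14.0
worker-1   Ready    <none>   30d   v1.14.0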

docker ps

shows nothing in this case when run on the master.

To get a list of running pods:

kubectl get pods

You can then attach to a pod already running on a node:

kubectl attach -i <podname>
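
If you want an interactive shell instead of attaching to the container's main process, kubectl exec is an alternative (sh rather than bash here, since the image in the question is Alpine-based):

kubectl exec -it <podname> -- sh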

Back to your question.

If you are interested in how Kubernetes works with the containers, including your application image and the Kubernetes infrastructure containers, you have to obtain the node's IP address first:

kubectl describe pod <podname> | grep ^Node:

or by:

kubectl get pods -o wide

Next, connect to the node via SSH and run:

docker ps

You will see the running containers, including the one you are looking for.
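
Since the container names follow the k8s_<container>_<pod>_<namespace>_... pattern shown in the output above, you can narrow the list down to the application container from the question, for example:

docker ps | grep auth-pod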
