
kubernetes cannot pull local image

I am using Kubernetes on a single machine for testing. I have built a custom image from the nginx Docker image, but when I try to use the image in Kubernetes I get an image pull error.

MY POD YAML

kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
      - mountPath: "/usr/share/nginx/html"
        name: mypd
  imagePullSecrets:
    - name: myregistrykey

  volumes:
    - name: mypd
      persistentVolumeClaim:
       claimName: myclaim-1

MY KUBERNETES COMMAND

kubectl create -f pod-yumserver.yaml

THE ERROR

kubectl describe pod yumserver


Name: yumserver
Namespace: default
Image(s):   my/nginx:latest
Node:       127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels:     name=frontendhttp
Status:     Pending
Reason:     
Message:    
IP:     172.17.0.2
Controllers:    <none>
Containers:
  myfrontend:
    Container ID:   
    Image:      my/nginx:latest
    Image ID:       
    QoS Tier:
      memory:       BestEffort
      cpu:      BestEffort
    State:      Waiting
      Reason:       ErrImagePull
    Ready:      False
    Restart Count:  0
    Environment Variables:
Conditions:
  Type      Status
  Ready     False 
Volumes:
  mypd:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim-1
    ReadOnly:   false
  default-token-64w08:
    Type:   Secret (a secret that should populate this volume)
    SecretName: default-token-64w08
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason          Message
  --------- --------    -----   ----            -------------           --------    ------          -------
  13s       13s     1   {default-scheduler }                    Normal      Scheduled       Successfully assigned yumserver to 127.0.0.1
  13s       13s     1   {kubelet 127.0.0.1}                 Warning     MissingClusterDNS   kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  12s       12s     1   {kubelet 127.0.0.1} spec.containers{myfrontend} Normal      Pulling         pulling image "my/nginx:latest"
  8s        8s      1   {kubelet 127.0.0.1} spec.containers{myfrontend} Warning     Failed          Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
  8s        8s      1   {kubelet 127.0.0.1}                 Warning     FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"

So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want on your single-machine setup. This is happening because the latest tag implicitly sets the imagePullPolicy to Always. You can try setting it to IfNotPresent explicitly, or change to a tag other than latest. – Timo Reimann Apr 28 at 7:16

For some reason Timo Reimann only posted this as a comment above, but it definitely should be the official answer to this question, so I'm posting it again.

Run eval $(minikube docker-env) before building your image.

Full answer here: https://stackoverflow.com/a/40150867

This should work whether or not you are using minikube:

1) Start a local registry container:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
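You can check that the registry container is up by querying its HTTP API; the /v2/_catalog endpoint lists the pushed repositories (empty at this point):

```shell
# List repositories in the local registry; expects the registry:2 container
# started above to be listening on localhost:5000.
curl http://localhost:5000/v2/_catalog
```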

2) Run docker images to find out the REPOSITORY and TAG of your local image. Then create a new tag for your local image:

docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>

If the TAG for your local image is <none>, you can simply do:

docker tag <local-image-repository> localhost:5000/<local-image-name>
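The two retagging cases above can be wrapped in a small helper that prints the right command for either case; the function name retag_for_local_registry and the strip-the-path naming scheme are just assumptions for this sketch:

```shell
#!/bin/sh
# Print the docker tag command that publishes a local image to localhost:5000.
# Handles the case where the local image's TAG is <none> (or empty).
retag_for_local_registry() {
  repo="$1"
  tag="$2"
  name="${repo##*/}"  # strip any leading path, e.g. my/nginx -> nginx
  if [ -z "$tag" ] || [ "$tag" = "<none>" ]; then
    echo "docker tag ${repo} localhost:5000/${name}"
  else
    echo "docker tag ${repo}:${tag} localhost:5000/${name}"
  fi
}

retag_for_local_registry my/nginx latest    # -> docker tag my/nginx:latest localhost:5000/nginx
retag_for_local_registry my/nginx '<none>'  # -> docker tag my/nginx localhost:5000/nginx
```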

3) Push to the local registry:

docker push localhost:5000/<local-image-name>

This will automatically add the latest tag to localhost:5000/<local-image-name>. You can check again by doing docker images.

4) In your YAML file, set imagePullPolicy to IfNotPresent:

...
spec:
  containers:
  - name: <name>
    image: localhost:5000/<local-image-name>
    imagePullPolicy: IfNotPresent
...

That's it. Now your ImagePullError should be resolved.
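Applied to the pod from the question, steps 1-4 would look roughly like this (assuming the custom image was tagged and pushed as localhost:5000/nginx):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: localhost:5000/nginx   # pushed to the local registry in step 3
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          name: "http-server"
```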

The easiest way to analyze ErrImagePull problems further is to SSH into the node and try to pull the image manually with docker pull my/nginx:latest. I've never set up Kubernetes on a single machine, but I could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.

Just add imagePullPolicy to your deployment file; it worked for me:

 spec:
  containers:
  - name: <name>
    image: <local-image-name>
    imagePullPolicy: Never

In your case, your YAML file should have imagePullPolicy: Never; see below:

kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
      - mountPath: "/usr/share/nginx/html"
        name: mypd
  imagePullSecrets:
    - name: myregistrykey

  volumes:
    - name: mypd
      persistentVolumeClaim:
       claimName: myclaim-1

Found this here: https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/

Are you using minikube on Linux? You need to install Docker (I think), but you don't need to start it; minikube will do that. Try using the KVM driver with this command:

minikube start --vm-driver kvm

Then run the eval $(minikube docker-env) command to make sure you use the minikube Docker environment. Build your container with a tag: docker build -t mycontainername:version .

If you then type docker ps you should see a bunch of minikube containers already running. The KVM utilities are probably already on your machine, but on CentOS/RHEL they can be installed like this:

yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python 

If you are using a vm driver, you will need to tell Kubernetes to use the Docker daemon running inside of the single node cluster instead of the host.

Run the following command:

eval $(minikube docker-env)

Note - This command will need to be repeated anytime you close and restart the terminal session.

Afterward, you can build your image:

docker build -t USERNAME/REPO .

Update your pod manifest as shown above, then run:

kubectl apply -f myfile.yaml

Make sure that your "Kubernetes Context" in Docker Desktop is actually "docker-desktop" (i.e., not a remote cluster).

(Right click on Docker icon, then select "Kubernetes" in menu)

All you need to do is run a docker build from your Dockerfile (or get all the images onto the nodes of your cluster), apply a suitable docker tag, and create the manifest.

Kubernetes doesn't pull directly from the registry: it first searches for the image in local storage, and only then in the Docker registry.

  1. Pull latest nginx image

    docker pull nginx

    docker tag nginx:latest test:test8970

  2. Create a deployment: kubectl run test --image=test:test8970 . It won't go to the Docker registry to pull the image; it will bring up the pod instantly.

  3. If the image is not present on the local machine, it will try to pull it from the Docker registry and fail with an ErrImagePull error.

  4. Also, if you change to imagePullPolicy: Never , it will never look at the registry and will fail with ErrImageNeverPull if the image is not found locally.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
      - image: test:test8970
        name: test
        imagePullPolicy: Never
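For reference, the three possible values behave as follows; this snippet is only an annotated sketch of the same container spec:

```yaml
spec:
  containers:
  - name: test
    image: test:test8970
    # Always:       pull from the registry on every start (implicit default for :latest)
    # IfNotPresent: use a local copy if one exists, otherwise pull
    # Never:        never pull; fail with ErrImageNeverPull if no local copy exists
    imagePullPolicy: IfNotPresent
```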

Adding another answer here, as the above gave me enough to figure out the cause of my particular instance of this issue. It turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:

<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.27.2</version>
    <configuration>
        <images>
            <image>
                <name>akka-cluster-demo:${docker.image.version}</name>
                <build>
                    <from>openjdk:8-jre-alpine</from>

Adding this:

                    <tags>
                        <tag>latest</tag>
                        <tag>${git.commit.version}</tag>
                    </tags>

The rest continues as before:

                    <ports>
                        <port>8080</port>
                        <port>8558</port>
                        <port>2552</port>
                    </ports>
                    <entryPoint>
                        <exec>
                            <args>/bin/sh</args>
                            <args>-c</args>
                            <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
                        </exec>
                    </entryPoint>
                    <assembly>
                        <inline>
                            <dependencySets>
                                <dependencySet>
                                    <useProjectAttachments>true</useProjectAttachments>
                                    <includes>
                                        <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                                    </includes>
                                    <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                                </dependencySet>
                            </dependencySets>
                        </inline>
                    </assembly>
                </build>
            </image>
        </images>
    </configuration>
</plugin>

I was facing a similar issue: the image was present locally, but k8s was not able to pick it up. So I went to the terminal, deleted the old image, and ran the eval $(minikube -p minikube docker-env) command. Then I rebuilt the image, redeployed the deployment YAML, and it worked.

ContainerD (and Windows)

I had the same error while trying to run a custom Windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged latest, so the comment from Timo Reimann didn't apply.

Also, on the node machine, the image showed up when using nerdctl images. However, it didn't show up in crictl images.

Thanks to a comment on GitHub, I found out that the actual problem is that containerd uses a different namespace.

As the following two commands show, images are not automatically built in the correct namespace:

ctr -n default images ls    # shows the application images (wrong namespace)
ctr -n k8s.io images ls     # shows the base images

To solve the problem, export the images and re-import them into the correct namespace, k8s.io, using the following command:

ctr --namespace k8s.io image import exported-app-image.tar
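For completeness, the export side of that round trip looks like this; the image name and tar file name are examples:

```shell
# Export the image from the default containerd namespace to a tar archive,
# then import it into the namespace the kubelet reads from (k8s.io).
ctr -n default image export exported-app-image.tar example-app:1.0
ctr -n k8s.io image import exported-app-image.tar
# The image should now be visible to the kubelet:
crictl images
```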
