
(Kubernetes + Minikube) can't get docker image from local registry

I have set up docker on my machine, and also minikube, which has docker inside it, so I probably have two docker instances running on different VMs.

I build an image and tag it, then push it to the local registry. The push succeeds, I can pull the image back from the registry, and when I run curl to get the tags list I get a result. Here is what I did:

1- docker build -t 127.0.0.1:5000/eliza/console:0.0.1 .
2- docker run -d -p 5000:5000 --name registry registry:2
3- docker tag a3703d02a199 127.0.0.1:5000/eliza/console:0.0.1
4- docker push 127.0.0.1:5000/eliza/console:0.0.1
5- curl -X GET http://127.0.0.1:5000/v2/eliza/console/tags/list

All of the above steps work fine with no problems at all.
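For reference, a successful tags-list call (step 5) returns JSON along these lines (a sketch, assuming only the 0.0.1 tag has been pushed):

curl -s http://127.0.0.1:5000/v2/eliza/console/tags/list
# -> {"name":"eliza/console","tags":["0.0.1"]}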

My problem is when I run minikube and try to access this image from the local registry inside it.

So when I run the following commands:

1- sudo minikube start --insecure-registry 127.0.0.1:5000
2- eval $(minikube docker-env)
3- minikube ssh
4- curl -X GET http://127.0.0.1:5000/v2/eliza/console/tags/list

the last step (point 4) gives me this message:

curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused

So I can access the image registry from my machine but not from minikube, which of course causes a problem when I deploy this image using Kubernetes on minikube: the deploy fails because it can't connect to http://127.0.0.1:5000.

Can you help me configure minikube to see my local registry, so that my problem is solved and I can successfully deploy the image to minikube using Kubernetes?

UPDATE

I am using this YAML file (I named it ConsolePre.yaml) to deploy my image using Kubernetes:

apiVersion: v1
kind: Service
metadata:
  name: tripbru-console
  labels:
    app: tripbru-console
spec:
  ports:
    - port: 9080
      targetPort: 9080
      nodePort: 30181
  selector:
    app: tripbru-console
    tier: frontend
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tripbru-console
  labels:
    app: tripbru-console
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: tripbru-console
        tier: frontend
    spec:
      containers:
      - image: docker.local:5000/eliza/console:0.0.1
        name: tripbru-console
        ports:
        - containerPort: 9080
          name: tripbru-console

And when I run the next command to apply the changes:

sudo kubectl apply -f /PATH_TO_YAML_FILE/ConsolePre.yaml

the result is:

NAME                                      READY     STATUS         RESTARTS   AGE
po/tripbru-console-1655054400-x3g87       0/1       ErrImagePull   0          1m

And when I run the describe command:

sudo kubectl describe pod tripbru-console-1655054400-x3g87

I found the following message in the describe output:

Error response from daemon: {"message":"Get https://docker.local:5000/v1/_ping: dial tcp: lookup docker.local on 10.0.2.3:53: read udp 10.0.2.15:57792->10.0.2.3:53: i/o timeout"}

And I configured docker.local xxx.xxx.xx.4 in minikube's /etc/hosts, so I don't know where 10.0.2.3:53 and 10.0.2.15:57792 come from.

So how can I solve this issue too?

Thanks :)

The issue is the assumption that 127.0.0.1 can be used anywhere you want. That is wrong.

So if your machine's IP is 192.168.0.101, then the following works:

1- docker build -t 127.0.0.1:5000/eliza/console:0.0.1 .
2- docker run -d -p 5000:5000 --name registry registry:2
3- docker tag a3703d02a199 127.0.0.1:5000/eliza/console:0.0.1
4- docker push 127.0.0.1:5000/eliza/console:0.0.1
5- curl -X GET http://127.0.0.1:5000/v2/eliza/console/tags/list

Because docker run publishes the registry on your machine, it is reachable there as both 127.0.0.1:5000 and 192.168.0.101:5000. But that 127.0.0.1 address only works on your machine. Now when you use:

3- minikube ssh

you get inside the minikube machine, which does not have a registry running on 127.0.0.1:5000; hence the error. From inside that machine the registry is only reachable via your host machine's IP.
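A quick way to see the difference from inside the VM (a sketch, assuming the 192.168.0.101 host IP used in this example):

minikube ssh
curl http://127.0.0.1:5000/v2/_catalog        # refused: nothing listens on 5000 inside the VM
curl http://192.168.0.101:5000/v2/_catalog    # reaches the registry published on the host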

The way I usually solve this issue is by using a hostname both locally and inside the other VMs.

So on your machine, create an entry in /etc/hosts:

127.0.0.1 docker.local

And change your commands to:

1- docker build -t docker.local:5000/eliza/console:0.0.1 .
2- docker run -d -p 5000:5000 --name registry registry:2
3- docker tag a3703d02a199 docker.local:5000/eliza/console:0.0.1
4- docker push docker.local:5000/eliza/console:0.0.1
5- curl -X GET http://docker.local:5000/v2/eliza/console/tags/list

And then when you use minikube ssh, make an entry for docker.local in /etc/hosts there:

192.168.0.101 docker.local

Then curl -X GET http://docker.local:5000/v2/eliza/console/tags/list should work from inside minikube as well.
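A minimal sketch of the two /etc/hosts edits (assuming the 192.168.0.101 host IP from above; the file inside the minikube VM may be reset when the VM restarts, so the second entry may need to be re-added after minikube start):

# On the host:
echo "127.0.0.1 docker.local" | sudo tee -a /etc/hosts
# Inside the minikube VM:
minikube ssh "echo '192.168.0.101 docker.local' | sudo tee -a /etc/hosts"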

Edit-1

For the TLS issue, you need to stop the docker service inside minikube:

systemctl stop docker

Then edit /etc/systemd/system/docker.service.d/10-machine.conf and change

ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.0.0.0/24

to

ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.0.0.0/24 --insecure-registry docker.local:5000 --insecure-registry 192.168.1.4:5000

Then reload the daemon and start the docker service:

systemctl daemon-reload
systemctl start docker

After that, try to pull:

docker pull docker.local:5000/eliza/console:0.0.1

And the command should work.
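If the pull still fails, two hedged sanity checks (they assume the docker.local hosts entry above is present inside the VM):

minikube ssh "ps aux | grep [d]ocker"                        # did the daemon restart with the extra --insecure-registry flags?
minikube ssh "curl -s http://docker.local:5000/v2/_catalog"  # is the registry reachable over plain HTTP from the VM?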

How to access processes running on the host machine from within a Docker container?

It is a popular question in docker-land; see https://stackoverflow.com/a/24326540/6785908. There are other ways too. For example, on Docker for Mac, the docker.for.mac.localhost DNS name will resolve to the host machine.

From https://docs.docker.com/docker-for-mac/networking/#i-cannot-ping-my-containers:

The Mac has a changing IP address (or none if you have no network access). From 17.06 onwards our recommendation is to connect to the special Mac-only DNS name docker.for.mac.localhost, which will resolve to the internal IP address used by the host.
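For example, a sketch (only applies on Docker for Mac 17.06+ and assumes the registry from earlier is published on host port 5000):

# From inside any container on Docker for Mac, the host-published registry
# is reachable via the special DNS name:
docker run --rm busybox wget -qO- http://docker.for.mac.localhost:5000/v2/_catalog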

Assuming that the primary purpose of this minikube is local testing, there is an easier way to deploy your docker container (this doesn't even need a local docker registry).

Method 2: Point your docker CLI to the Docker daemon running within your minikube and execute the docker build command there.

The first thing to understand here is that when you install docker on your machine, it has two parts: 1) a docker CLI with which you interact with the docker daemon, and 2) the docker daemon itself. In this method we point the local docker CLI at minikube's docker daemon and execute docker build there.

https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/getting-started-guides/minikube.md#reusing-the-docker-daemon

Quoting the relevant parts here:

When using a single VM of Kubernetes, it's really handy to reuse minikube's built-in Docker daemon, as this means you don't have to build a docker registry on your host machine and push the image into it - you can just build inside the same docker daemon as minikube, which speeds up local experiments. Just make sure you tag your Docker image with something other than 'latest' and use that tag while you pull the image. Otherwise, if you do not specify a version for your image, it will be assumed to be :latest, with a pull image policy of Always correspondingly, which may eventually result in ErrImagePull as you may not have any versions of your Docker image out there in the default docker registry (usually DockerHub) yet.

To be able to work with the docker daemon on your mac/linux host, use the docker-env command in your shell:

eval $(minikube docker-env)

You should now be able to use docker on the command line of your host mac/linux machine, talking to the docker daemon inside the minikube VM:

Do a docker container list command: docker ps. It should display even the containers related to the kubernetes system (because your CLI is now pointed at the docker daemon where your minikube is running).

Now build your docker image. It will then be available inside minikube, as sketched below.
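A condensed sketch of that workflow (the image name is the one from the question; the non-:latest tag matters so the default pull policy finds the image locally):

eval $(minikube docker-env)             # point the local docker CLI at minikube's daemon
docker build -t eliza/console:0.0.1 .   # the image lands directly in minikube's image cache
# In the Deployment, reference it as:
#   image: eliza/console:0.0.1
# No registry and no push needed; with a fixed (non-:latest) tag the kubelet uses the local image.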

Minikube runs inside a docker container, so you should see it as a separate machine. Inside this machine Kubernetes runs. NOTE: it is important to understand that we have the Minikube environment and the Kubernetes environment. This matters because connecting to the local registry from Minikube is not the same as connecting from Kubernetes (which resides on Minikube). Here is the difference (diagram not reproduced here).

When you create a Job / Deployment / StatefulSet, the creation is done by minikube, so it does not know about any service that points at our local registry in docker. Curiously, our "registry" service does work within pods: once the Job / Deployment / StatefulSet has been created, you can access the "registry" service without problems. So what is the solution to all this? Minikube can easily connect to our local registry through 192.168.49.1:5000. If you want your Jobs / Deployments / StatefulSets to be created with images from a local registry, just prefix the image with 192.168.49.1:5000, as sketched just below. On the other hand, if you want to access your local registry from within pods, you will need a Service and an Endpoints object (see the sketch at the end of this answer).
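A hedged sketch of the image-prefix approach (reusing the eliza/console image from the question; 192.168.49.1 is the gateway address minikube typically sees, which can be checked as described below):

# Push from the host exactly as in the question; the registry published on
# port 5000 is the same one minikube reaches at 192.168.49.1:5000:
docker push 127.0.0.1:5000/eliza/console:0.0.1
# In the pod spec, prefix the image with the address minikube sees:
#   image: 192.168.49.1:5000/eliza/console:0.0.1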

Considerations: remember that it is very important to allow minikube to access your local registry by starting it with:

minikube start --insecure-registry 192.168.49.1:5000

It is rare for minikube to use an IP other than 192.168.49.1, but just in case, it is better to check with:

minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1'

This is all assuming you have a registry created in docker with port 5000 exposed.
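And for in-pod access to the registry, a sketch of the Service + Endpoints pair mentioned above (the local-registry name is an assumption; the IP is the gateway address discussed in this answer):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: local-registry        # selector-less Service; routing comes from the Endpoints below
spec:
  ports:
    - port: 5000
---
apiVersion: v1
kind: Endpoints
metadata:
  name: local-registry        # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.49.1      # the host as seen from inside minikube
    ports:
      - port: 5000
EOF
# Pods in the same namespace can then reach the registry at http://local-registry:5000.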

You can issue this command to point your docker CLI at minikube: eval $(minikube docker-env). Then you can build your images there, or export them from anywhere else and import them.
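For the export/import route, a hedged sketch (image name reused from the question):

# Build with the host daemon, then stream the image into minikube's daemon:
docker build -t eliza/console:0.0.1 .
docker save eliza/console:0.0.1 | (eval $(minikube docker-env) && docker load)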
