Publish docker container port and access that port from another docker container
How to access kind control plane port from another docker container?
I am creating a kind cluster with kind create cluster --name kind, and I want to access it from another docker container. But when I try to apply a Kubernetes manifest from a container (kubectl apply -f deployment.yml), I get this error:
The connection to the server 127.0.0.1:6445 was refused - did you specify the right host or port?
Indeed, when I try to curl the kind control-plane from a container, it is unreachable:
> docker run --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
curl: (7) Failed to connect to 127.0.0.1 port 6445 after 0 ms: Connection refused
Yet the kind control plane is published on the right port, but only on localhost:
> docker ps --format "table {{.Image}}\t{{.Ports}}"
IMAGE PORTS
kindest/node:v1.23.4 127.0.0.1:6445->6443/tcp
The only solution I have found so far is to use host network mode:
> docker run --network host --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
Client sent an HTTP request to an HTTPS server.
That solution does not look like the most secure option. Is there another way, such as connecting the kind network to my container, or something similar that I have missed?
I don't know why you want to do this, but no problem, I think this can help you.
First, let's pull your docker image:
❯ docker pull curlimages/curl
In my kind cluster I have 3 control-plane nodes and 3 worker nodes. Here are the containers of my kind cluster:
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39dbbb8ca320 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:35327->6443/tcp so-cluster-1-control-plane
62b5538275e9 kindest/haproxy:v20220207-ca68f7d4 "haproxy -sf 7 -W -d…" 7 days ago Up 7 days 127.0.0.1:35625->6443/tcp so-cluster-1-external-load-balancer
9f189a1b6c52 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:40845->6443/tcp so-cluster-1-control-plane3
4c53f745a6ce kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:36153->6443/tcp so-cluster-1-control-plane2
97e5613d2080 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30081->30080/tcp so-cluster-1-worker2
0ca64a907707 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30080->30080/tcp so-cluster-1-worker
9c5d26caee86 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30082->30080/tcp so-cluster-1-worker3
The container we are interested in here is the haproxy container (kindest/haproxy:v20220207-ca68f7d4), whose role is to load-balance the traffic entering the nodes (in our case, especially the control-plane nodes). We can see that port 35625 of our host is mapped to port 6443 of the haproxy container (127.0.0.1:35625->6443/tcp).
So our cluster endpoint is https://127.0.0.1:35625, which we can confirm in our kubeconfig file (~/.kube/config):
❯ cat .kube/config
apiVersion: v1
kind: Config
preferences: {}
users:
- name: kind-so-cluster-1
  user:
    client-certificate-data: <base64data>
    client-key-data: <base64data>
clusters:
- cluster:
    certificate-authority-data: <certificate-authority-dataBase64data>
    server: https://127.0.0.1:35625
  name: kind-so-cluster-1
contexts:
- context:
    cluster: kind-so-cluster-1
    user: kind-so-cluster-1
    namespace: so-tests
  name: kind-so-cluster-1
current-context: kind-so-cluster-1
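To double-check the endpoint without reading the whole file, the server line can be pulled out with a plain text match (a quick sketch; it assumes the default kubeconfig location, and kubectl config view would of course work too):

```shell
# Print the API server endpoint(s) recorded in the kubeconfig
# (simple text match: the kubeconfig stores one "server:" line per cluster)
grep 'server:' ~/.kube/config
```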
Let's run the curl container in the background:
❯ docker run -d --network host curlimages/curl sleep 3600
ba183fe2bb8d715ed1e503a9fe8096dba377f7482635eb12ce1322776b7e2366
As expected, we can't reach the endpoint with a plain HTTP request against a port that is listening for HTTPS:
❯ docker exec -it ba curl 127.0.0.1:35625
Client sent an HTTP request to an HTTPS server.
We can try using the certificate from the certificate-authority-data field of our kubeconfig to check whether that changes anything (it should). Let's create a file named my-ca.crt containing the decoded certificate:
base64 -d <<< <certificate-authority-dataBase64dataFromKubeConfig> > my-ca.crt
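If you would rather not copy the base64 string by hand, the same file can be produced in one step (a sketch, assuming kubectl is installed and its current context points at the kind cluster):

```shell
# Decode the cluster CA straight out of the kubeconfig into my-ca.crt
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
  | base64 -d > my-ca.crt
```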
Since the working directory of the curl docker image is "/", let's copy our certificate to that location in the container and verify that it is indeed there:
docker cp my-ca.crt ba183fe:/
❯ docker exec -it ba sh
/ $ ls my-ca.crt
my-ca.crt
Let's try our curl request again, this time with the certificate:
❯ docker exec -it ba curl --cacert my-ca.crt https://127.0.0.1:35625
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
You can get the same result by adding the --insecure flag to the curl request:
❯ docker exec -it ba curl https://127.0.0.1:35625 --insecure
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
However, we cannot access our cluster as an anonymous user! So let's get a token from kubernetes (see https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/):
# Create a secret to hold a token for the default service account
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: default-token
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
EOF
Once the token controller has populated the secret with a token:
# Get the token value
❯ kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi
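Rather than pasting that long token inline, it can be captured in a shell variable first and interpolated into the header (the same two commands as in this answer, just composed; "ba" is the curl container started earlier):

```shell
# Capture the service-account token once...
TOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)
# ...then let the shell expand it into the Authorization header
docker exec -it ba curl -X GET https://127.0.0.1:35625/api \
  --header "Authorization: Bearer $TOKEN" --insecure
```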
Now let's execute the curl command directly with the token!
❯ docker exec -it ba curl -X GET https://127.0.0.1:35625/api --header "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi" --insecure
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "172.18.0.5:6443"
}
]
}
It works. I still don't know why you want to do this, but I hope it helps you.
Since this is not what you wanted (because I am using the host network here), you can use this instead: How to communicate between Docker containers via "hostname", as proposed by @SergioSantiago. Thanks for your comment!
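Another route worth mentioning: kind itself can be told at cluster-creation time to bind the API server to a non-loopback address. A minimal sketch of such a config (apiServerAddress and apiServerPort are kind networking options; the values here are illustrative, and binding to 0.0.0.0 exposes the API server beyond localhost, so use with care):

```yaml
# kind-config.yaml -- illustrative values
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # Bind the API server to all interfaces instead of 127.0.0.1
  apiServerAddress: "0.0.0.0"
  apiServerPort: 6445
```

Then create the cluster with: kind create cluster --name kind --config kind-config.yaml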
Not enough rep to comment on the other answers, but I wanted to share what ultimately worked for me.
Unless your container runs in the host network, both containers need to be on the same network, and the API server must be addressed as kind-control-plane:6443. That port is the internal container port 6443, NOT the exposed host port (38669 in the example below):
CONTAINER ID IMAGE PORTS
7f2ee0c1bd9a kindest/node:v1.25.3 127.0.0.1:38669->6443/tcp
# path/to/some/kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true # Don't use in Prod; equivalent of --insecure on the cli
    server: https://<kind-control-plane container name>:6443 # NOTE port is the internal container port
  name: kind-kind # or whatever
contexts:
- context:
    cluster: kind-kind
    user: <some-service-account>
  name: kind-kind # or whatever
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: <some-service-account>
  user:
    token: <TOKEN>
If you are using docker-compose, you can add the kind network to your container, for example:
# docker-compose.yml
services:
  foobar:
    build:
      context: ./.config
    networks:
      - kind # add this container to the kind network
    volumes:
      - path/to/some/kube/config:/somewhere/in/the/container
networks:
  kind: # define the kind network
    external: true # specifies that the network already exists in docker
If running a new container:
docker run --network kind -v path/to/some/kube/config:/somewhere/in/the/container <image>
Container already running?
docker network connect kind <container name>