Helm kubernetes on AKS Pod CrashLoopBackOff
I am trying to deploy a simple Node.js application through Helm on Azure Kubernetes Service, but after my image is pulled the pod shows CrashLoopBackOff.
Here is what I have tried so far:
My Dockerfile:
FROM node:6
# Create the app directory
WORKDIR /usr/src/app
# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install
# Bundle the app source
COPY . .
# The app listens on 32000 (see server.js)
EXPOSE 32000
CMD [ "npm", "start" ]
My server.js:
'use strict';

const express = require('express');

// Listen on 32000 on all interfaces, matching EXPOSE in the Dockerfile
const PORT = 32000;
const HOST = '0.0.0.0';

const app = express();
app.get('/', (req, res) => {
  res.send('Hello world from container.\n');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
I have pushed this image to ACR.
New update: here is the complete output of kubectl describe pod POD_NAME:
Name:               myrel02-mychart06-5dc9d4b86c-kqg4n
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-nodepool1-19665249-0/10.240.0.6
Start Time:         Tue, 05 Feb 2019 11:31:27 +0500
Labels:             app.kubernetes.io/instance=myrel02
                    app.kubernetes.io/name=mychart06
                    pod-template-hash=5dc9d4b86c
Annotations:        <none>
Status:             Running
IP:                 10.244.2.5
Controlled By:      ReplicaSet/myrel02-mychart06-5dc9d4b86c
Containers:
  mychart06:
    Container ID:   docker://c239a2b9c38974098bbb1646a272504edd2d199afa50f61d02a0ce335fe60660
    Image:          registry-1.docker.io/arycloud/docker-web-app:0.5
    Image ID:       docker-pullable://registry-1.docker.io/arycloud/docker-web-app@sha256:4faab280d161b727e0a6a6d9dfb52b22cf9c6cd7dd07916d6fe164d9af5737a7
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 05 Feb 2019 11:39:56 +0500
      Finished:     Tue, 05 Feb 2019 11:40:22 +0500
    Ready:          False
    Restart Count:  7
    Liveness:       http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io
      KUBERNETES_PORT:               tcp://cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gm49w (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  default-token-gm49w:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gm49w
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                               Message
  ----     ------     ----                   ----                               -------
  Normal   Scheduled  10m                    default-scheduler                  Successfully assigned default/myrel02-mychart06-5dc9d4b86c-kqg4n to aks-nodepool1-19665249-0
  Normal   Pulling    10m                    kubelet, aks-nodepool1-19665249-0  pulling image "registry-1.docker.io/arycloud/docker-web-app:0.5"
  Normal   Pulled     10m                    kubelet, aks-nodepool1-19665249-0  Successfully pulled image "registry-1.docker.io/arycloud/docker-web-app:0.5"
  Warning  Unhealthy  9m30s (x6 over 10m)    kubelet, aks-nodepool1-19665249-0  Liveness probe failed: Get http://10.244.2.5:80/: dial tcp 10.244.2.5:80: connect: connection refused
  Normal   Created    9m29s (x3 over 10m)    kubelet, aks-nodepool1-19665249-0  Created container
  Normal   Started    9m29s (x3 over 10m)    kubelet, aks-nodepool1-19665249-0  Started container
  Normal   Killing    9m29s (x2 over 9m59s)  kubelet, aks-nodepool1-19665249-0  Killing container with id docker://mychart06:Container failed liveness probe.. Container will be killed and recreated.
  Warning  Unhealthy  9m23s (x7 over 10m)    kubelet, aks-nodepool1-19665249-0  Readiness probe failed: Get http://10.244.2.5:80/: dial tcp 10.244.2.5:80: connect: connection refused
  Normal   Pulled     5m29s (x6 over 9m59s)  kubelet, aks-nodepool1-19665249-0  Container image "registry-1.docker.io/arycloud/docker-web-app:0.5" already present on machine
  Warning  BackOff    22s (x33 over 7m59s)   kubelet, aks-nodepool1-19665249-0  Back-off restarting failed container
Update: the output of docker logs CONTAINER_ID:
> nodejs@1.0.0 start /usr/src/app
> node server.js
Running on http://0.0.0.0:32000
How can I avoid this problem?
Thanks in advance!
As I can see from the kubectl describe pod command output, the container inside the pod has completed with exit code 0 (@4c74356b41 mentioned this in the comments). Reason: Completed indicates it finished successfully, without any errors or problems. However, the pod's lifecycle is very short, so Kubernetes keeps scheduling new pods, and the liveness and readiness probes still cannot verify the container's health.
To keep the pod running, you have to specify a task (a process) inside the container that keeps running. There are many discussions and solutions on how to resolve this kind of issue, and more hints can be found here.
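One detail worth checking, grounded in the output above: the probes dial http://10.244.2.5:80/ (the container port is 80/TCP) while docker logs shows the app listening on 32000, so every check is refused and the container keeps getting killed and recreated. Below is a minimal sketch of how the container spec in the chart's deployment template could align the probe port with the app; the fields are standard Kubernetes, but the delay values are assumptions to adjust for your chart:

ports:
  - name: http
    containerPort: 32000      # must match the port server.js listens on
    protocol: TCP
livenessProbe:
  httpGet:
    path: /
    port: http                # resolves to containerPort 32000 above
  initialDelaySeconds: 15     # assumed value; gives node time to start
readinessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 5      # assumed value

Once the named http port points at 32000 (or the app is changed to listen on the port the chart expects), the liveness probe should stop killing the container and the restart loop should settle.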
The kubectl logs command only works when the pod is up and running. If it is not, you can use kubectl get events. It gives you some event logs and sometimes (in my experience) clues about what is going on.
kubectl get events -n <your_app_namespace> --sort-by='.metadata.creationTimestamp'
It does not sort the events by default, hence the --sort-by flag.