
Kubernetes pod on Google Container Engine continually restarts, is never ready

I'm trying to get a Ghost blog deployed on GKE, working off of the "persistent disks with WordPress" tutorial. I have a working container that runs fine manually on a GKE node:

docker run -d --name my-ghost-blog -p 2368:2368 us.gcr.io/my_project_id/my-ghost-blog

I can also correctly create a pod using the following method from another tutorial:

kubectl run ghost --image=us.gcr.io/my_project_id/my-ghost-blog --port=2368

When I do that I can curl the blog on its internal IP from within the cluster, and get the following output from kubectl describe pod:

Name:       ghosty-nqgt0
Namespace:      default
Image(s):     us.gcr.io/my_project_id/my-ghost-blog
Node:       very-long-node-name/10.240.51.18
Labels:       run=ghost
Status:       Running
Reason:
Message:
IP:       10.216.0.9
Replication Controllers:  ghost (1/1 replicas created)
Containers:
  ghosty:
    Image:  us.gcr.io/my_project_id/my-ghost-blog
    Limits:
      cpu:    100m
    State:    Running
      Started:    Fri, 04 Sep 2015 12:18:44 -0400
    Ready:    True
    Restart Count:  0
Conditions:
  Type    Status
  Ready   True
Events:
  ...

The problem arises when I instead try to create the pod from a yaml file, per the WordPress tutorial. Here's the yaml:

apiVersion: v1
kind: Pod
metadata:
  name: ghost
  labels:
    name: ghost
spec:
  containers:
    - image: us.gcr.io/my_project_id/my-ghost-blog
      name: ghost
      env:
        - name: NODE_ENV
          value: production
        - name: VIRTUAL_HOST
          value: myghostblog.com
      ports:
        - containerPort: 2368

When I run kubectl create -f ghost.yaml, the pod is created but never becomes ready:

> kubectl get pod ghost
NAME      READY     STATUS    RESTARTS   AGE
ghost     0/1       Running   11         3m

The pod continuously restarts, as confirmed by the output of kubectl describe pod ghost:

Name:       ghost
Namespace:      default
Image(s):     us.gcr.io/my_project_id/my-ghost-blog
Node:       very-long-node-name/10.240.51.18
Labels:       name=ghost
Status:       Running
Reason:
Message:
IP:       10.216.0.12
Replication Controllers:  <none>
Containers:
  ghost:
    Image:  us.gcr.io/my_project_id/my-ghost-blog
    Limits:
      cpu:    100m
    State:    Running
      Started:    Fri, 04 Sep 2015 14:08:20 -0400
    Ready:    False
    Restart Count:  10
Conditions:
  Type    Status
  Ready   False
Events:
  FirstSeen       LastSeen      Count From              SubobjectPath       Reason    Message
  Fri, 04 Sep 2015 14:03:20 -0400 Fri, 04 Sep 2015 14:03:20 -0400 1 {scheduler }                      scheduled Successfully assigned ghost to very-long-node-name
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD created   Created with docker id dbbc27b4d280
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD started   Started with docker id dbbc27b4d280
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      created   Created with docker id ceb14ba72929
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      started   Started with docker id ceb14ba72929
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD pulled    Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      started   Started with docker id 0b8957fe9b61
  Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      created   Created with docker id 0b8957fe9b61
  Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      created   Created with docker id edaf0df38c01
  Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      started   Started with docker id edaf0df38c01
  Fri, 04 Sep 2015 14:03:50 -0400 Fri, 04 Sep 2015 14:03:50 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      started   Started with docker id d33f5e5a9637
...

This cycle of created/started goes on forever unless I kill the pod. The only difference from the successful pod is the lack of a replication controller. I don't expect that to be the problem, since the tutorial mentions nothing about an rc.

Why is this happening? How can I create a successful pod from a config file? And where would I find more verbose logs about what is going on?

If the same Docker image works via kubectl run but not in this pod, then something is wrong with the pod spec. Compare the full output of the pod as created from the spec and as created by the rc to see what differs, by running kubectl get pods <name> -o yaml for both. Shot in the dark: is it possible the env vars specified in the pod spec are causing it to crash on startup?
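Using the pod names from the question (ghosty-nqgt0 from kubectl run, ghost from the yaml file), that comparison might look like this sketch (the output file names are arbitrary):

```shell
# Dump the full spec of each pod as the apiserver sees it, then diff.
# Differences in env, image, command, or resources are the suspects.
kubectl get pod ghosty-nqgt0 -o yaml > from-run.yaml
kubectl get pod ghost -o yaml > from-file.yaml
diff from-run.yaml from-file.yaml
```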

Maybe you could use a different restartPolicy in the yaml file?

What you have I believe is equivalent to

restartPolicy: Never

with no replication controller. You may try adding this line to the yaml and setting it to Always (this is what an RC provides), or to OnFailure.
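As a sketch of where that field would go in the yaml from the question: restartPolicy sits at the spec level, alongside containers, not inside a container entry (the env entries are omitted here for brevity):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ghost
spec:
  restartPolicy: OnFailure   # or Always / Never
  containers:
    - image: us.gcr.io/my_project_id/my-ghost-blog
      name: ghost
      ports:
        - containerPort: 2368
```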

https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pod-states.md#restartpolicy

Container logs may be useful, via kubectl logs.

Usage:

kubectl logs [-p] POD [-c CONTAINER]

http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_logs.html
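For the pod in the question, fetching logs from both the current container and the previous (crashed) instance might look like:

```shell
# Logs from the currently running container in the "ghost" pod
kubectl logs ghost

# Logs from the previous, terminated container instance (-p);
# useful here because the container keeps crashing and restarting,
# so the crash output lives in the prior instance's logs.
kubectl logs -p ghost
```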
