How to cause an intentional restart of a single kubernetes pod
I am testing a log previous command and for that I need a pod to restart.
I can get my pods using a command like:
kubectl get pods -n $ns -l $label
Which shows that my pods did not restart so far. I want to test the command:
kubectl logs $podname -n $ns --previous=true
That command fails because my pod did not restart, making the --previous=true
switch meaningless.
I am aware of this command to restart pods when configuration changed:
kubectl rollout restart deployment myapp -n $ns
This does not restart the containers in a way that is meaningful for my log command test; instead it terminates the old pods and creates new pods (which have a restart count of 0).
I tried various versions of exec to see if I can shut them down from within, but most commands I would use are not found in that container:
kubectl exec $podname -n $ns -- shutdown
kubectl exec $podname -n $ns -- shutdown now
kubectl exec $podname -n $ns -- halt
kubectl exec $podname -n $ns -- poweroff
How can I use a kubectl command to forcefully restart the pod, with it retaining its identity and the restart counter increasing by one, so that my test log command has a previous instance to return the logs from?
EDIT: Connecting to the pod is well described.
kubectl -n $ns exec --stdin --tty $podname -- /bin/bash
The process list shows only a handful of running processes:
ls -1 /proc | grep -Eo "^[0-9]{1,5}$"
proc 1 seems to be the one running the pod.
kill 1
does nothing, not even kill the process with PID 1.
I am still looking into this at the moment.
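A likely explanation (my assumption, not stated in the original post): inside a PID namespace, the kernel delivers a signal to PID 1 only if the process has installed a handler for it, so a plain kill 1 (which sends SIGTERM) is silently ignored unless PID 1 catches SIGTERM. One way to check from inside the container which signals PID 1 actually catches:

```shell
# Inspect PID 1's signal dispositions inside the container.
# SigCgt is a hex bitmask of caught signals; if bit 15 is set
# (mask & 0x4000 != 0), SIGTERM is caught and `kill 1` should work.
grep '^SigCgt' /proc/1/status
```

This is consistent with the nginx example further down: the nginx master process handles SIGTERM, so kill 1 terminates it there.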
There are different ways to achieve your goal. I'll describe the most useful options below.
The most correct and efficient way is to restart the pod at the container runtime level.
I tested this on Google Cloud Platform - GKE and minikube with the docker driver.
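To ssh into the right machine, you first need to know which worker node the pod landed on. A quick way to see it (cluster-dependent, shown with the question's variables):

```shell
# The NODE column shows which worker node to ssh into.
kubectl get pod $podname -n $ns -o wide
```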
You need to ssh into the worker node where the pod is running. Then find its POD ID:
$ crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9863a993e0396 87a94228f133e 3 minutes ago Running nginx-3 2 6d17dad8111bc
OR
$ crictl pods -s ready
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
6d17dad8111bc About an hour ago Ready nginx-3 default 2 (default)
Then stop it:
$ crictl stopp 6d17dad8111bc
Stopped sandbox 6d17dad8111bc
After some time, kubelet will start this pod again (with a different POD ID in CRI; however, the kubernetes cluster treats this pod as the same):
$ crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f5f0442841899 87a94228f133e 41 minutes ago Running nginx-3 3 b628e1499da41
This is how it looks in the cluster:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-3 1/1 Running 3 48m
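To check the counter directly rather than reading it off the table, a jsonpath query works (a sketch; it assumes the pod has a single container):

```shell
# Prints just the restart count of the pod's first container.
kubectl get pod nginx-3 -o jsonpath='{.status.containerStatuses[0].restartCount}'
```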
Getting logs with the --previous=true flag also confirmed it's the same POD for kubernetes.
It works with most images, though not always.
E.g. I tested it on a simple pod with the nginx image:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 27h
$ kubectl exec -it nginx -- /bin/bash
root@nginx:/# kill 1
root@nginx:/# command terminated with exit code 137
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 1 27h
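With the restart count now at 1, the command from the question finally has a previous instance to read from (cluster-dependent, using the pod name from this example):

```shell
# Returns the logs of the terminated container instance.
kubectl logs nginx --previous=true
```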