Is there a way to gracefully end a pod with the Kubernetes client-go?
The main question is whether there is a way to finish a pod from the client-go SDK. I'm not trying to delete the pod; I just want to end it with a Phase-Status of Completed.

In the code below I try to update the pod phase, but it doesn't work: it does not return an error or panic, yet the pod does not finish. My code:
    package main

    import (
        "context"
        "fmt"
        "reflect"
        "strings"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // creates the in-cluster config
        config, err := rest.InClusterConfig()
        if err != nil {
            panic(err.Error())
        }
        // creates the clientset
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err.Error())
        }
        for {
            pods, err := clientset.CoreV1().Pods("ns").List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                panic(err.Error())
            }
            for _, pod := range pods.Items {
                podName := pod.Name
                if strings.Contains(strings.ToLower(podName), "single-condition") {
                    fmt.Println("get pods metadatada")
                    fmt.Println(pod.Name)
                    fmt.Printf("pod.Name %s \n", pod.Name)
                    fmt.Printf("Status.Phase %s \n", pod.Status.Phase)
                    fmt.Printf("PodIP %s \n", pod.Status.PodIP)
                    containers := pod.Status.ContainerStatuses
                    if len(containers) > 0 {
                        for _, c := range containers {
                            fmt.Printf("c.Name %s \n", c.Name)
                            fmt.Printf("c.State %s \n", c.State)
                            fmt.Printf("c.State.Terminated %s \n", c.State.Terminated)
                            stateTerminated := c.State.Terminated
                            stateRunning := c.State.Running
                            if stateTerminated == nil && stateRunning != nil {
                                fmt.Printf("c.State.Terminated %s \n", c.State.Terminated)
                                fmt.Printf("stateRunning Reason: %s\n", reflect.TypeOf(c.State.Running))
                                getPod, getErr := clientset.CoreV1().Pods("ns").Get(context.TODO(), "single-condition-pipeline-9rqrs-1224102659", metav1.GetOptions{})
                                if getErr != nil {
                                    fmt.Println("error1")
                                    panic(fmt.Errorf("Failed to get: %v", getErr))
                                }
                                fmt.Println("update values")
                                fmt.Printf(" getPodName %d \n", getPod.Name)
                                getPod.Status.Phase = "Succeeded"
                                fmt.Println("updated status phase")
                                getContainers := getPod.Status.ContainerStatuses
                                fmt.Printf("len get container %d \n", len(getContainers))
                                _, updateErr := clientset.CoreV1().Pods("argo-workflows").Update(context.TODO(), getPod, metav1.UpdateOptions{})
                                fmt.Println("commit update")
                                if updateErr != nil {
                                    fmt.Println("error updated")
                                    panic(fmt.Errorf("Failed to update: %v", updateErr))
                                }
                            } else {
                                fmt.Printf("c.State.Terminated %s \n", c.State.Terminated.Reason)
                                //fmt.Println("Not finished ready!!!")
                                //fmt.Printf("c.State.Running %s \n", c.State.Running)
                                //fmt.Printf("c.State.Waiting %s \n", c.State.Waiting)
                            }
                        }
                    }
                }
            }
            time.Sleep(10 * time.Second)
        }
    }
And some logs:
single-condition-pipeline-9rqrs-1224102659
pod.Name single-condition-pipeline-9rqrs-1224102659
Status.Phase Running
PodIP XXXXXXXXXXXX
c.Name main
---------------------------------------------------------------------------------------------
c.State {nil &ContainerStateRunning{StartedAt:2021-10-29 04:41:51 +0000 UTC,} nil}
c.State.Terminated nil
c.State.Terminated nil
stateRunning Reason: *v1.ContainerStateRunning
update values
getPodName %!d(string=single-condition-pipeline-9rqrs-1224102659)
updated status phase
len get container 2
commit update
c.Name wait
c.State {nil &ContainerStateRunning{StartedAt:2021-10-29 04:41:51 +0000 UTC,} nil}
c.State.Terminated nil
c.State.Terminated nil
stateRunning Reason: *v1.ContainerStateRunning
update values
getPodName %!d(string=single-condition-pipeline-9rqrs-1224102659)
updated status phase
len get container 2
---------------------------------------------------------------------------------------------
commit update
---------------------------------------------------------------------------------------------
get pods metadatada
single-condition-pipeline-9rqrs-1224102659
pod.Name single-condition-pipeline-9rqrs-1224102659
Status.Phase Running
PodIP XXXXXXXXXX
c.Name main
c.State {nil &ContainerStateRunning{StartedAt:2021-10-29 04:41:51 +0000 UTC,} nil}
c.State.Terminated nil
c.State.Terminated nil
stateRunning Reason: *v1.ContainerStateRunning
update values
getPodName %!d(string=single-condition-pipeline-9rqrs-1224102659)
updated status phase
len get container 2
commit update
c.Name wait
c.State {nil &ContainerStateRunning{StartedAt:2021-10-29 04:41:51 +0000 UTC,} nil}
c.State.Terminated nil
c.State.Terminated nil
stateRunning Reason: *v1.ContainerStateRunning
update values
getPodName %!d(string=single-condition-pipeline-9rqrs-1224102659)
updated status phase
len get container 2
commit update
So here, https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-status mentions a Patch, but I don't know how to use it. Could somebody help me, or is there another way to finish the pod?
You cannot set the phase, or anything else in the Pod status field; it is read-only. According to the Pod Lifecycle documentation, your pod will have a phase of Succeeded after "All containers in the Pod have terminated in success, and will not be restarted." So this will only happen if you can cause all of your pod's containers to exit with status code 0, and only if the pod's restartPolicy is set to OnFailure or Never. If it is set to Always (the default), then the containers will eventually restart and your pod will eventually return to the Running phase.
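For illustration, a pod declared so that it can reach the Succeeded phase might look like this (a minimal sketch; the pod name, image, and command are placeholders, not taken from your workflow):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: single-condition-example   # placeholder name
spec:
  restartPolicy: Never             # or OnFailure; with Always the pod can never reach Succeeded
  containers:
    - name: main
      image: busybox               # placeholder image
      command: ["sh", "-c", "echo working; exit 0"]  # must exit with status 0
```

Once every container in such a pod exits with status 0, the kubelet itself moves the pod to the Succeeded phase; no API write is needed or possible.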
In summary, you cannot do what you are attempting to do via the Kube API directly. You must:

1. Ensure your pod has a restartPolicy that can support the Succeeded phase (OnFailure or Never).
2. Cause your pod's containers to terminate successfully, perhaps by sending them SIGINT or SIGTERM, or possibly by commanding them via their own API.