kubernetes-client watch till all pods are up
I want to poll pod status in a namespace until all pods are up, and I am using the kubernetes-client library to check it. What should the exit condition be when all pods are up? I need to close the watch once all pods are up, but it keeps polling until the timeout given while creating the watch object.
String file = "C:\\Project\\config.yaml";
String content = readFromFile(file);
InputStream kubeConfigStream = new ByteArrayInputStream(content.getBytes(StandardCharsets.UTF_8));
ApiClient client = getApiClient(kubeConfigStream);
Configuration.setDefaultApiClient(client);

// infinite read timeout so the HTTP layer does not cut the watch off
OkHttpClient httpClient =
        client.getHttpClient().newBuilder().readTimeout(0, TimeUnit.SECONDS).build();
client.setHttpClient(httpClient);
Configuration.setDefaultApiClient(client);

CoreV1Api api = new CoreV1Api();
String namespace = "default";

@Cleanup
Watch<V1Pod> watch = Watch.createWatch(
        client,
        api.listNamespacedPodCall(namespace, null, true, null,
                null, null, 20, null, null, 190, Boolean.TRUE, null), // timeoutSeconds=190, watch=true
        new TypeToken<Watch.Response<V1Pod>>() {}.getType());

for (Watch.Response<V1Pod> item : watch) {
    V1PodStatus podStatus = item.object.getStatus();
    String name = item.object.getMetadata().getName();
    String status = podStatus.getPhase();
    System.out.printf("NAME: %s\t STATUS: %s%n", name, status);
}
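The part I am unsure about is the exit condition itself. Below is a rough sketch of what I have in mind: track which pods have reached the Running phase and stop once they all have. Note that expectedPodCount is a placeholder I would have to supply myself (e.g. from the replica counts), not something the client provides.

Set<String> runningPods = new HashSet<>();
for (Watch.Response<V1Pod> item : watch) {
    String name = item.object.getMetadata().getName();
    String phase = item.object.getStatus().getPhase();
    if ("Running".equals(phase)) {
        runningPods.add(name);
    } else {
        runningPods.remove(name);   // pod fell back to Pending/Failed, no longer counts as up
    }
    if (runningPods.size() >= expectedPodCount) {
        break;   // all pods are up; @Cleanup (or try-with-resources) closes the watch here
    }
}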
I get all pod details using the call below:
V1PodList listNamespacePod =
        coreV1Api.listNamespacedPod(
                namespace,
                null, null, null, null, null,
                Integer.MAX_VALUE,   // limit
                null, null,
                40,                  // timeoutSeconds
                Boolean.FALSE);      // watch=false: plain list, no streaming
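From that list I seed a map of pod name to phase, and then let the watch keep it current. Roughly like this (namespacePods is just my own HashMap, not a client type):

Map<String, String> namespacePods = new HashMap<>();
for (V1Pod pod : listNamespacePod.getItems()) {
    // key: pod name, value: current phase (Pending/Running/Succeeded/Failed/Unknown)
    namespacePods.put(pod.getMetadata().getName(), pod.getStatus().getPhase());
}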
After that, I apply live updates using the watch events:
switch (eventType) {
    case "ADDED":
    case "MODIFIED":
        checkContainerStatus(v1PodResponse, podName, namespacePods);
        break;
    case "DELETED":
        namespacePods.remove(podName);
        break;
    case "FAILED":
        namespacePods.put(podName, eventType);
        break;
    default:
}
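For reference, checkContainerStatus is my own helper, roughly along these lines (simplified sketch): it only marks a pod as up when the phase is Running and every container reports ready.

private void checkContainerStatus(V1Pod pod, String podName, Map<String, String> namespacePods) {
    V1PodStatus status = pod.getStatus();
    boolean allContainersReady = status.getContainerStatuses() != null
            && status.getContainerStatuses().stream()
                     .allMatch(cs -> Boolean.TRUE.equals(cs.getReady()));
    if ("Running".equals(status.getPhase()) && allContainersReady) {
        namespacePods.put(podName, "UP");
    } else {
        namespacePods.put(podName, status.getPhase());
    }
}

With that in place, I suppose the loop could stop once every value in namespacePods is "UP", but I am not sure this is the intended way to end a watch before its timeout.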