Where are these kubernetes healthchecks coming from?

So I have deployments exposed behind a GCE ingress. On the deployment, I implemented a simple readinessProbe on a working path, as follows:

    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /claim/maif/login/?next=/claim/maif
        port: 8888
        scheme: HTTP
      initialDelaySeconds: 20
      periodSeconds: 60
      successThreshold: 1
      timeoutSeconds: 1

Everything works well: the first healthcheck comes 20 seconds later and answers 200:

{address space usage: 521670656 bytes/497MB} {rss usage: 107593728 bytes/102MB} [pid: 92|app: 0|req: 1/1] 10.108.37.1 () {26 vars in 377 bytes} [Tue Nov  6 15:13:41 2018] GET /claim/maif/login/?next=/claim/maif => generated 4043 bytes in 619 msecs (HTTP/1.1 200) 7 headers in 381 bytes (1 switches on core 0)

But, just after that, I get tons of other requests from other healthchecks, on / :

{address space usage: 523993088 bytes/499MB} {rss usage: 109850624 bytes/104MB} [pid: 92|app: 0|req: 2/2] 10.132.0.14 () {24 vars in 277 bytes} [Tue Nov  6 15:13:56 2018] GET / => generated 6743 bytes in 53 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 515702784 bytes/491MB} {rss usage: 100917248 bytes/96MB} [pid: 93|app: 0|req: 1/3] 10.132.0.20 () {24 vars in 277 bytes} [Tue Nov  6 15:13:56 2018] GET / => generated 1339 bytes in 301 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 518287360 bytes/494MB} {rss usage: 103759872 bytes/98MB} [pid: 93|app: 0|req: 2/4] 10.132.0.14 () {24 vars in 277 bytes} [Tue Nov  6 15:13:58 2018] GET / => generated 6743 bytes in 52 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 518287360 bytes/494MB} {rss usage: 103837696 bytes/99MB} [pid: 93|app: 0|req: 3/5] 10.132.0.21 () {24 vars in 277 bytes} [Tue Nov  6 15:13:58 2018] GET / => generated 6743 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 523993088 bytes/499MB} {rss usage: 109875200 bytes/104MB} [pid: 92|app: 0|req: 3/6] 10.132.0.4 () {24 vars in 275 bytes} [Tue Nov  6 15:13:58 2018] GET / => generated 6743 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)

As I understand it, the documentation says:

The Ingress controller looks for a compatible readiness probe first; if it finds one, it adopts it as the GCE load balancer's HTTP(S) health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, the Ingress controller points the GCE load balancer's HTTP health check at '/'. This is an example of an Ingress that adopts the readiness probe from the endpoints as its health check.

But I don't understand this behaviour. How can I limit the healthchecks to just the one I defined on my deployment?

Thanks,

Ok, so this very well may not work. I ran into a similar issue where my readiness probes were not being respected. I was able to edit this from the GCP console GUI: search for 'healthcheck' and then find the health checks created by GKE for the service.

I was able to change mine to TCP, which made it work for some reason.

Worth a try. Personally I ran into it when running a multi-region ingress, so my setup is likely different, but it still relies on GCE-Ingress.
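A more declarative route than editing the generated check in the console (and one that survives reconciliation) is GKE's BackendConfig CRD, which lets you pin the health check settings on the Service behind the Ingress. This is a sketch under the assumption that you're on a reasonably recent GKE; the names my-backendconfig and my-service are hypothetical, and I'm not certain requestPath accepts query strings, so I've used the bare path:

    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: my-backendconfig   # hypothetical name
    spec:
      healthCheck:
        type: HTTP
        requestPath: /claim/maif/login/   # query strings may not be supported here
        port: 8888
        checkIntervalSec: 60
        timeoutSec: 1
        healthyThreshold: 1
        unhealthyThreshold: 3

You then attach it to the Service the Ingress routes to, via the annotation cloud.google.com/backend-config: '{"default": "my-backendconfig"}' in the Service metadata.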

You need to define ports in your deployment.yaml for the port numbers used in the readinessProbe, like:

    ports:
    - containerPort: 8888
      name: health-check-port
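Putting the two pieces together, the container spec would look roughly like this (a sketch, not your full manifest; the container name and image are hypothetical), with the containerPort matching the readinessProbe port so the Ingress controller can treat the probe as "compatible":

    containers:
    - name: app              # hypothetical container name
      image: my-app:latest   # hypothetical image
      ports:
      - containerPort: 8888
        name: health-check-port
      readinessProbe:
        httpGet:
          path: /claim/maif/login/?next=/claim/maif
          port: 8888
          scheme: HTTP
        initialDelaySeconds: 20
        periodSeconds: 60
        failureThreshold: 3
        successThreshold: 1
        timeoutSeconds: 1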
