Linkerd inbound port annotation leads to "Failed to bind inbound listener"
We are using Linkerd 2.11.1 on Azure AKS Kubernetes. Among others, there is a Deployment using an Alpine Linux image containing Apache/mod_php/PHP 8 serving an API. HTTPS is terminated by Traefik v2 with cert-manager, so incoming traffic to the APIs is on port 80. The Linkerd proxy container is injected as a sidecar.
Recently I noticed that the API containers return 504 errors for a short period of time during a rolling deployment. In the sidecar's log, I found the following:
[ 0.000590s] INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[ 0.001062s] INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[ 0.001078s] INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[ 0.001081s] INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[ 0.001083s] INFO ThreadId(01) linkerd2_proxy: Tap interface on 0.0.0.0:4190
[ 0.001085s] INFO ThreadId(01) linkerd2_proxy: Local identity is default.my-api.serviceaccount.identity.linkerd.cluster.local
[ 0.001088s] INFO ThreadId(01) linkerd2_proxy: Identity verified via linkerd-identity-headless.linkerd.svc.cluster.local:8080 (linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local)
[ 0.001090s] INFO ThreadId(01) linkerd2_proxy: Destinations resolved via linkerd-dst-headless.linkerd.svc.cluster.local:8086 (linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local)
[ 0.014676s] INFO ThreadId(02) daemon:identity: linkerd_app: Certified identity: default.my-api.serviceaccount.identity.linkerd.cluster.local
[ 3674.769855s] INFO ThreadId(01) inbound:server{port=80}: linkerd_app_inbound::detect: Handling connection as opaque timeout=linkerd_proxy_http::version::Version protocol detection timed out after 10s
My guess is that this detection somehow leads to the 504 errors. However, if I add the Linkerd inbound port annotation to the pod template (Terraform syntax):
resource "kubernetes_deployment" "my_api" {
  metadata {
    name      = "my-api"
    namespace = "my-api"
    labels = {
      app = "my-api"
    }
  }
  spec {
    replicas = 20
    selector {
      match_labels = {
        app = "my-api"
      }
    }
    template {
      metadata {
        labels = {
          app = "my-api"
        }
        annotations = {
          "config.linkerd.io/inbound-port" = "80"
        }
      }
I get the following:
time="2022-03-01T14:56:44Z" level=info msg="Found pre-existing key: /var/run/linkerd/identity/end-entity/key.p8"
time="2022-03-01T14:56:44Z" level=info msg="Found pre-existing CSR: /var/run/linkerd/identity/end-entity/csr.der"
[ 0.000547s] INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
thread 'main' panicked at 'Failed to bind inbound listener: Os { code: 13, kind: PermissionDenied, message: "Permission denied" }', /github/workspace/linkerd/app/src/lib.rs:195:14
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Can somebody tell me why it fails to bind the inbound listener?
Any help is much appreciated,
thanks,
Pascal
Found it: Kubernetes asynchronously sends requests to shut down the pods and to stop routing traffic to them. If a pod shuts down faster than it is removed from the endpoint lists, it can receive requests while already dead.
To fix this, I added a preStop lifecycle hook to the application container:
lifecycle {
  pre_stop {
    exec {
      command = ["/bin/sh", "-c", "sleep 5"]
    }
  }
}
and the following annotation to the pod template:
annotations = {
  "config.alpha.linkerd.io/proxy-wait-before-exit-seconds" = "10"
}
Documented here:
https://linkerd.io/2.11/tasks/graceful-shutdown/
and here:
https://blog.gruntwork.io/delaying-shutdown-to-wait-for-pod-deletion-propagation-445f779a8304
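Putting both pieces together, the relevant part of the pod template looks roughly like this (a sketch assembled from the snippets above; the surrounding resource and spec blocks, image, ports, etc. are omitted):

```hcl
template {
  metadata {
    labels = {
      app = "my-api"
    }
    annotations = {
      # Keep the Linkerd proxy sidecar alive a little longer than the app,
      # so responses still in flight can be proxied back out.
      "config.alpha.linkerd.io/proxy-wait-before-exit-seconds" = "10"
    }
  }
  spec {
    container {
      # ... image, ports, etc. ...
      lifecycle {
        pre_stop {
          exec {
            # Keep serving briefly so requests that arrive while the pod
            # is being removed from the endpoint lists are still handled.
            command = ["/bin/sh", "-c", "sleep 5"]
          }
        }
      }
    }
  }
}
```

The sleep gives the endpoint removal time to propagate before the app stops; the proxy wait must be longer than the sleep so the sidecar outlives the application container.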
annotations = {
  "config.linkerd.io/inbound-port" = "80"
}
I don't think you want this setting. Linkerd will transparently proxy connections without you setting anything.
This setting configures Linkerd's proxy to try to listen on port 80, which would likely conflict with your web server's port configuration; but the specific error you're hitting is that the Linkerd proxy does not run as root, so it does not have permission to bind port 80.
I'd expect it all to work if you removed that annotation :)
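As an aside: if the protocol-detection timeout in the original logs had pointed at genuinely non-HTTP traffic on a port, the knob documented for that in Linkerd 2.11 is the opaque-ports annotation, not inbound-port. A sketch in the same Terraform style (only appropriate when the port really carries non-HTTP traffic, which is not the case for this API):

```hcl
annotations = {
  # Skip protocol detection on this port and proxy it as an opaque
  # TCP stream (see the Linkerd docs on protocol detection).
  "config.linkerd.io/opaque-ports" = "80"
}
```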