oauth2-proxy authentication calls slow on kubernetes cluster with auth annotations for nginx ingress
We have secured some of our services on the K8S cluster using the approach described on this page. Concretely, we have:
nginx.ingress.kubernetes.io/auth-url: "https://oauth2.${var.hosted_zone}/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.${var.hosted_zone}/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
set on the service to be secured, and we have followed this tutorial to have only one deployment of oauth2_proxy per cluster. We have 2 proxies set up, both with affinity to be placed on the same node as the nginx ingress.
$ kubectl get pods -o wide -A | egrep "nginx|oauth"
infra-system wer-exp-nginx-ingress-exp-controller-696f5fbd8c-bm5ld 1/1 Running 0 3h24m 10.76.11.65 ip-10-76-9-52.eu-central-1.compute.internal <none> <none>
infra-system wer-exp-nginx-ingress-exp-controller-696f5fbd8c-ldwb8 1/1 Running 0 3h24m 10.76.14.42 ip-10-76-15-164.eu-central-1.compute.internal <none> <none>
infra-system wer-exp-nginx-ingress-exp-default-backend-7d69cc6868-wttss 1/1 Running 0 3h24m 10.76.15.52 ip-10-76-15-164.eu-central-1.compute.internal <none> <none>
infra-system wer-exp-nginx-ingress-exp-default-backend-7d69cc6868-z998v 1/1 Running 0 3h24m 10.76.11.213 ip-10-76-9-52.eu-central-1.compute.internal <none> <none>
infra-system oauth2-proxy-68bf786866-vcdns 2/2 Running 0 14s 10.76.10.106 ip-10-76-9-52.eu-central-1.compute.internal <none> <none>
infra-system oauth2-proxy-68bf786866-wx62c 2/2 Running 0 14s 10.76.12.107 ip-10-76-15-164.eu-central-1.compute.internal <none> <none>
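For reference, the co-location visible above is typically achieved with pod affinity; a minimal sketch of what we use looks like the following (the label key/value is an assumption here, not copied from our manifests, so adjust it to whatever labels your ingress controller pods carry):

```yaml
# Pod affinity sketch for the oauth2-proxy Deployment: schedule each
# replica onto a node that already runs an nginx ingress controller pod.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: nginx-ingress        # assumed label on the controller pods
        topologyKey: kubernetes.io/hostname
```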
However, a simple website load usually takes around 10 seconds, compared to 2-3 seconds when the proxy annotations are not present on the secured service.
We added a proxy_cache to the auth.domain.com service, which hosts our proxy, by adding
"nginx.ingress.kubernetes.io/server-snippet": <<EOF
proxy_cache auth_cache;
proxy_cache_lock on;
proxy_ignore_headers Cache-Control;
proxy_cache_valid any 30m;
add_header X-Cache-Status $upstream_cache_status;
EOF
but this didn't improve the latency either. We still see every HTTP request triggering a log line in our proxy. Oddly, only some of the requests take 5 seconds.
We are unsure whether:
- the proxy forwards each request to the oauth provider (github), or
- it caches the authentications
We use cookie authentication, so in theory the oauth2_proxy should just decrypt the cookie and return a 200 to the nginx ingress. Since they are both on the same node, it should be fast. But it's not. Any ideas?
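For context, the cookie-session setup on our oauth2-proxy is along these lines (a hedged sketch; the exact values, secret name, and secret handling are illustrative, not our real deployment):

```yaml
# Relevant oauth2-proxy container args for cookie-based sessions
# with the GitHub provider. All values are illustrative.
args:
  - --provider=github
  - --http-address=0.0.0.0:4180
  - --cookie-secure=true
  - --cookie-expire=168h            # force re-authentication weekly
env:
  - name: OAUTH2_PROXY_COOKIE_SECRET
    valueFrom:
      secretKeyRef:
        name: oauth2-proxy-secret   # assumed Secret name
        key: cookie-secret
```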
I have analyzed the situation further. Visiting my auth server with https://oauth2.domain.com/auth in the browser and copying the request as a curl command, I found that:
nginx.ingress.kubernetes.io/auth-url: http://172.20.95.17/oauth2/auth
(i.e. setting the host to the cluster IP) makes the GUI load as expected (fast).

A better fix I found was to set the annotation to the following:
nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
The auth-url is what the ingress queries with the cookie of the user. Hence, the local DNS name of the oauth2 service serves the same content as the external DNS name, but without the SSL communication, and since it's DNS, it's permanent (while the cluster IP is not).
Given that it's unlikely that someone will come up with the reason why this happens, I'll post my workaround as an answer.
A fix I found was to set the annotation to the following:
nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
The auth-url is what the ingress queries with the cookie of the user. Hence, the local DNS name of the oauth2 service serves the same content as the external DNS name, but without the SSL communication, and since it's DNS, it's permanent (while the cluster IP is not).
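Put together, a secured service's Ingress then looks roughly like this (the Ingress name, backend service, and host are placeholders, not our actual values):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                       # placeholder name
  annotations:
    # Auth subrequest stays inside the cluster: plain HTTP to the
    # oauth2-proxy Service DNS name instead of the external URL.
    nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
    # The sign-in redirect must stay external, since the browser follows it.
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
spec:
  rules:
    - host: app.domain.com           # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```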
In my opinion, you observe the increased latency in response time in the case of the
nginx.ingress.kubernetes.io/auth-url: "https://oauth2.${var.hosted_zone}/oauth2/auth"
setting, due to the fact that the auth server URL resolves to the external service (in this case, the VIP of the Load Balancer sitting in front of the Ingress Controller).
Practically, it means that the traffic goes outside of the cluster (so-called hairpin mode) and comes back via the external IP of the Ingress, which routes to the internal ClusterIP Service (adding extra hops), instead of going directly via the ClusterIP/Service DNS name (staying within the Kubernetes cluster):
nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
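The in-cluster name above follows the standard Kubernetes Service DNS scheme, `<service>.<namespace>.svc.cluster.local`, so it resolves as long as a Service named `oauth2` exists in the `infra-system` namespace, e.g. (the selector and ports are assumptions about the oauth2-proxy deployment, not a confirmed manifest):

```yaml
# Sketch of the Service the in-cluster auth-url resolves to.
apiVersion: v1
kind: Service
metadata:
  name: oauth2              # -> oauth2.infra-system.svc.cluster.local
  namespace: infra-system
spec:
  selector:
    app: oauth2-proxy       # assumed pod label
  ports:
    - port: 80              # auth-url uses plain HTTP on port 80
      targetPort: 4180      # oauth2-proxy's default listen port
```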