NGINX Ingress controller returning 502 with no logs in backend application pod
I have deployed ECK on my Kubernetes cluster (all Vagrant VMs). The cluster has the following configuration:
NAME STATUS ROLES AGE VERSION
kmaster1 Ready control-plane,master 27d v1.21.1
kworker1 Ready <none> 27d v1.21.1
kworker2 Ready <none> 27d v1.21.1
I have also set up a load balancer using HAProxy. The load balancer is configured as follows (I created my own private certificate):
frontend http_front
bind *:80
stats uri /haproxy?stats
default_backend http_back
frontend https_front
bind *:443 ssl crt /etc/ssl/private/mydomain.pem
stats uri /haproxy?stats
default_backend https_back
backend http_back
balance roundrobin
server kworker1 172.16.16.201:31953
server kworker2 172.16.16.202:31953
backend https_back
balance roundrobin
server kworker1 172.16.16.201:31503 check-ssl ssl verify none
server kworker2 172.16.16.202:31503 check-ssl ssl verify none
I have also deployed an NGINX ingress controller; 31953 is the HTTP NodePort of the controller and 31503 is its HTTPS NodePort:
nginx-ingress nginx-ingress-controller-service NodePort 10.103.189.197 <none> 80:31953/TCP,443:31503/TCP 8d app=nginx-ingress
I am trying to make the Kibana dashboard available outside the cluster over HTTPS. It works fine and I can access it from inside the cluster, but I cannot access it through the load balancer.
Kibana pod:
default quickstart-kb-f74c666b9-nnn27 1/1 Running 4 27d 192.168.41.145 kworker1 <none> <none>
I have mapped the load balancer to the host:
172.16.16.100 elastic.kubekluster.com
Any request to https://elastic.kubekluster.com results in the following error (logs from the nginx ingress controller pod):
10.0.2.15 - - [20/Jun/2021:17:38:14 +0000] "GET / HTTP/1.1" 502 157 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/06/20 17:38:14 [error] 178#178: *566 upstream prematurely closed connection while reading response header from upstream, client: 10.0.2.15, server: elastic.kubekluster.com, request: "GET / HTTP/1.1", upstream: "http://192.168.41.145:5601/", host: "elastic.kubekluster.com"
The HAProxy logs are as follows:
Jun 20 18:11:45 loadbalancer haproxy[18285]: 172.16.16.1:48662 [20/Jun/2021:18:11:45.782] https_front~ https_back/kworker2 0/0/0/4/4 502 294 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
The Ingress is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kubekluster-elastic-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/default-backend: quickstart-kb-http
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-connect-timeout: "600s"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600s"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600s"
nginx.ingress.kubernetes.io/proxy-body-size: 20m
spec:
tls:
- hosts:
- elastic.kubekluster.com
rules:
- host: elastic.kubekluster.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: quickstart-kb-http
port:
number: 5601
I think the request is not reaching the Kibana pod, because I don't see any logs in the pod. I also don't understand why HAProxy sends the request as HTTP instead of HTTPS. Can you point out what is wrong with my configuration?
I hope this helps... this is how I set up a "LoadBalancer" with nginx and forward traffic to HTTPS services:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
asd-master-1 Ready master 72d v1.19.8 192.168.1.163 213.95.154.199 Ubuntu 20.04.2 LTS 5.8.0-45-generic docker://20.10.6
asd-node-1 Ready <none> 72d v1.19.8 192.168.1.101 <none> Ubuntu 20.04.1 LTS 5.8.0-45-generic docker://19.3.15
asd-node-2 Ready <none> 72d v1.19.8 192.168.0.5 <none> Ubuntu 20.04.1 LTS 5.8.0-45-generic docker://19.3.15
asd-node-3 Ready <none> 15d v1.19.8 192.168.2.190 <none> Ubuntu 20.04.1 LTS 5.8.0-45-generic docker://19.3.15
This is the service for nginx:
# kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.101.161.113 <none> 80:30337/TCP,443:31996/TCP 72d
This is the LoadBalancer configuration:
# cat /etc/nginx/nginx.conf
... trimmed ...
stream {
upstream nginx_http {
least_conn;
server asd-master-1:30337 max_fails=3 fail_timeout=5s;
server asd-node-1:30337 max_fails=3 fail_timeout=5s;
server asd-node-2:30337 max_fails=3 fail_timeout=5s;
}
server {
listen 80;
proxy_pass nginx_http;
proxy_protocol on;
}
upstream nginx_https {
least_conn;
server 192.168.1.163:31996 max_fails=3 fail_timeout=5s;
server 192.168.1.101:31996 max_fails=3 fail_timeout=5s;
server 192.168.0.5:31996 max_fails=3 fail_timeout=5s;
}
server {
listen 443;
proxy_pass nginx_https;
proxy_protocol on;
}
}
The relevant part is that I am sending the proxy protocol. You need to configure the nginx ingress (in its ConfigMap) to accept this, and probably add the correct syntax to the haproxy config.
It could look something like this:
backend https_back
balance roundrobin
server kworker1 172.16.16.201:31503 check-ssl ssl verify none send-proxy-v2
server kworker2 172.16.16.202:31503 check-ssl ssl verify none send-proxy-v2
The Nginx Ingress configuration should be:
# kubectl get configmap -n ingress-nginx nginx-configuration -o yaml
apiVersion: v1
data:
use-proxy-protocol: "true"
kind: ConfigMap
metadata:
...
I hope this puts you on the right track.
Inspired by @oz123's answer, I analyzed it further and was finally able to make it work with the following configuration.
Load balancer (HAProxy) configuration
Exposed the LB on a bridged network by configuring it in the Vagrantfile, and enabled TLS passthrough in HAProxy:
frontend kubernetes-frontend
bind 192.168.1.23:6443
mode tcp
option tcplog
default_backend kubernetes-backend
backend kubernetes-backend
mode tcp
option tcp-check
balance roundrobin
server kmaster1 172.16.16.101:6443 check fall 3 rise 2
frontend http_front
bind *:80
stats uri /haproxy?stats
default_backend http_back
frontend https_front
mode tcp
bind *:443
#ssl crt /etc/ssl/private/mydomain.pem
stats uri /haproxy?stats
default_backend https_back
backend http_back
balance roundrobin
server kworker1 172.16.16.201:32502
server kworker2 172.16.16.202:32502
backend https_back
mode tcp
balance roundrobin
server kworker1 172.16.16.201:31012
server kworker2 172.16.16.202:31012
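In TCP (passthrough) mode HAProxy can no longer do HTTPS-level health checks, but plain TCP checks still work. A possible variant of the https_back backend above, reusing the `check fall 3 rise 2` pattern already used for kubernetes-backend (ports are the controller's HTTPS NodePort):

```haproxy
backend https_back
    mode tcp
    balance roundrobin
    # plain TCP health checks; a node is marked down after 3 failed
    # checks and back up after 2 successful ones
    server kworker1 172.16.16.201:31012 check fall 3 rise 2
    server kworker2 172.16.16.202:31012 check fall 3 rise 2
```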
Ingress controller
Created a NodePort ingress controller service and exposed all internal services (e.g. Kibana) through this controller. All services other than the ingress controller are ClusterIP:
apiVersion: v1
kind: Service
metadata:
annotations:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.1.1
helm.sh/chart: ingress-nginx-4.0.15
name: ingress-nginx-controller
namespace: ingress-nginx
resourceVersion: "8198"
uid: 245a554f-56a8-4bc4-a3dd-19ffc9116a08
spec:
clusterIP: 10.105.43.200
clusterIPs:
- 10.105.43.200
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- appProtocol: http
name: http
nodePort: 32502
port: 80
protocol: TCP
targetPort: http
- appProtocol: https
name: https
nodePort: 31012
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
Ingress resource for Kibana
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
generation: 1
name: ingress-kibana
namespace: default
spec:
rules:
- host: kibana.kubekluster.com
http:
paths:
- backend:
service:
name: quickstart-kb-http
port:
number: 5601
path: /
pathType: Prefix
tls:
- secretName: quickstart-kb-http-certs-public
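Note that the ssl-passthrough annotation only takes effect if the ingress-nginx controller itself was started with the `--enable-ssl-passthrough` flag; without it the annotation is silently ignored. A sketch of the relevant part of the controller Deployment (container name and surrounding args assumed from the standard ingress-nginx manifests):

```yaml
# spec.template.spec.containers[0] of the ingress-nginx-controller Deployment
containers:
  - name: controller
    args:
      - /nginx-ingress-controller
      # required for nginx.ingress.kubernetes.io/ssl-passthrough to work
      - --enable-ssl-passthrough
```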
Finally, create an entry in /etc/hosts mapping the LB IP to the subdomain, and access the Kibana console at:
https://kibana.kubekluster.com
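For example, assuming the LB's bridged address is 192.168.1.23 (the address the kubernetes-frontend binds to above), the /etc/hosts entry would be:

```
192.168.1.23 kibana.kubekluster.com
```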