Kubernetes: Unable to configure Nginx Ingress to access an internal service
I am following a tutorial to access a pod running inside a Kubernetes cluster behind a service. The cluster runs on Windows 10 via Docker Desktop (with the Kubernetes option enabled).
I cannot reach it at https://local.ticket.dev/api/users/currentuser; the browser always says "This site can't be reached: local.ticket.dev unexpectedly closed the connection."
I have disabled the redirect, but HTTP is still being redirected to HTTPS:
Request URL: http://local.ticket.dev/api/users/currentuser
Request Method: GET
Status Code: 307 Internal Redirect
Referrer Policy: strict-origin-when-cross-origin
Location: https://local.ticket.dev/api/users/currentuser
Non-Authoritative-Reason: HSTS
Here is what it looks like:
kubectl get ingress
NAME              CLASS    HOSTS              ADDRESS   PORTS   AGE
ingress-service   <none>   local.ticket.dev             80      29s
kubectl get services (note that this runs on a local Windows 10 machine with Docker Desktop, and the LoadBalancer EXTERNAL-IP has stayed pending even after 6 hours):
NAME                                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
auth-srv                                   ClusterIP      10.96.254.94     <none>        3000/TCP                     45s
kubernetes                                 ClusterIP      10.96.0.1        <none>        443/TCP                      5h17m
nginx-ingress-1629401528-controller        LoadBalancer   10.110.199.210   <pending>     80:31430/TCP,443:32346/TCP   5h13m
nginx-ingress-1629401528-default-backend   ClusterIP      10.108.79.252    <none>        80/TCP                       5h13m
kubectl get pods
NAME                                                        READY   STATUS    RESTARTS   AGE
auth-depl-c98cdf66f-txqxt                                   1/1     Running   0          54s
nginx-ingress-1629401528-controller-569576ddbd-2htxz        1/1     Running   0          5h13m
nginx-ingress-1629401528-default-backend-69c7fc6549-xxf8w   1/1     Running   0          5h13m
Here is how I configured everything:
1 - Installed the Nginx ingress controller via the following command:
helm install stable/nginx-ingress --generate-name
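To confirm what the chart deployed, a generic check (not part of the original post; it assumes the chart's default app: nginx-ingress label) is:

# List the controller and default-backend resources created by the release:
kubectl get pods,svc -l app=nginx-ingress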
2 - Ran skaffold dev:
Listing files to watch...
- billo/ticket_auth
Generating tags...
- billo/ticket_auth -> billo/ticket_auth:latest
Some taggers failed. Rerun with -vdebug for errors.
Checking cache...
- billo/ticket_auth: Found Locally
Starting test...
Tags used in deployment:
- billo/ticket_auth -> billo/ticket_auth:d869228....
Starting deploy...
- deployment.apps/auth-depl created
- service/auth-srv created
- ingress.networking.k8s.io/ingress-service created
Waiting for deployments to stabilize...
- deployment/auth-depl is ready.
Deployments stabilized in 2.302 seconds
Waiting for deployments to stabilize...
Deployments stabilized in 6.9904ms
Press Ctrl+C to exit
Watching for changes...
[auth]
[auth] > auth@1.0.0 start
[auth] > ts-node-dev --poll src/index.ts
[auth]
[auth] [INFO] 00:59:23 ts-node-dev ver. 1.1.8 (using ts-node ver. 9.1.1, typescript ver. 4.3.5)
[auth] Auth!!!! listen to 3000 port
Looking at the last line, my Auth pod appears to be listening on port 3000.
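One way to confirm the pod and service wiring independently of the ingress (standard kubectl port-forwarding, not from the original post):

# Forward local port 3000 straight to the auth-srv service:
kubectl port-forward service/auth-srv 3000:3000
# Then, from a second shell:
curl http://localhost:3000/api/users/currentuser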
auth-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: billo/ticket_auth
          imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
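A quick way to confirm the Service selector actually matches the pod (a standard check, not from the original post):

# A populated ENDPOINTS column means the selector app: auth found the pod:
kubectl get endpoints auth-srv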
ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: local.ticket.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
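To see how the controller interpreted this resource (generic kubectl, not part of the original post):

# The Events section here often reveals why a rule is not being served:
kubectl describe ingress ingress-service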
Configuration in the hosts file:
# Added by Docker Desktop
127.0.0.1 host.docker.internal
127.0.0.1 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
127.0.0.1 ingress.local
127.0.0.1 local.ticket.dev
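To verify the mapping is picked up (a generic Windows command, not from the original post):

# Should report 127.0.0.1 per the hosts entry above:
ping -n 1 local.ticket.dev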
First of all, disable the HTTPS redirect by adding the annotation

nginx.ingress.kubernetes.io/ssl-redirect: "false"

to the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: local.ticket.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
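Note that the 307 with "Non-Authoritative-Reason: HSTS" in your trace comes from the browser, not from Nginx: Chrome keeps the entire .dev TLD on its HSTS preload list, so it rewrites http://local.ticket.dev to HTTPS before the request ever leaves the browser, regardless of this annotation. curl does not apply that preload list, so the rule can be verified over plain HTTP (a standard check, not from the original answer):

# Expect a response from the auth service (or at least from Nginx), not a closed connection:
curl -v http://local.ticket.dev/api/users/currentuser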
Did you get an external IP for the Nginx controller svc? Since you are on a local system, it shows as <pending>.
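As a workaround while the EXTERNAL-IP is <pending>, the controller should still be reachable through the NodePorts shown in your kubectl get services output above (80:31430/TCP, 443:32346/TCP), for example:

# Reach the ingress controller via its HTTP NodePort instead of port 80:
curl http://local.ticket.dev:31430/api/users/currentuser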
You may also need to add the entries to your hosts file.
Manually add the ingress hostname to /etc/hosts (on Windows 10 this file is at C:\Windows\System32\drivers\etc\hosts):
127.0.0.1 ingress.local
127.0.0.1 local.ticket.dev
OR
<host IP> local.ticket.dev
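To find that host IP (a standard kubectl query, not part of the original answer; on Docker Desktop the node address may not be routable from the host, in which case stick with 127.0.0.1 and the NodePort):

# INTERNAL-IP in this output is the node address to map to local.ticket.dev:
kubectl get nodes -o wide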