ingress-nginx not working when using ingressClassName instead of kubernetes.io/ingress.class in annotations
I have a bare-metal cluster deployed using Kubespray with Kubernetes 1.22.2, MetalLB, and ingress-nginx enabled. I am getting "404 Not Found" when trying to access any service deployed via Helm when I set ingressClassName: nginx. However, everything works fine when I don't use ingressClassName: nginx and instead use kubernetes.io/ingress.class: nginx in the annotations in the Helm chart's values.yaml. How can I get it to work using ingressClassName?
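For reference, this is a minimal sketch (with hypothetical host and service names) of the two ways to select the controller that I'm comparing - the spec.ingressClassName field introduced in v1.18 versus the older annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    # Legacy style, deprecated since v1.18 but still honored by ingress-nginx:
    # kubernetes.io/ingress.class: nginx
spec:
  # Preferred style since v1.18; requires a matching IngressClass resource:
  ingressClassName: nginx
  rules:
  - host: example.mycluster.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              number: 80
```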
These are my Kubespray settings in inventory/mycluster/group_vars/k8s_cluster/addons.yml:
# Nginx ingress controller deployment
ingress_nginx_enabled: true
ingress_nginx_host_network: false
ingress_publish_status_address: ""
ingress_nginx_nodeselector:
  kubernetes.io/os: "linux"
ingress_nginx_tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
ingress_nginx_namespace: "ingress-nginx"
ingress_nginx_insecure_port: 80
ingress_nginx_secure_port: 443
ingress_nginx_configmap:
  map-hash-bucket-size: "128"
  ssl-protocols: "TLSv1.2 TLSv1.3"
ingress_nginx_configmap_tcp_services:
  9000: "default/example-go:8080"
ingress_nginx_configmap_udp_services:
  53: "kube-system/coredns:53"
ingress_nginx_extra_args:
  - --default-ssl-certificate=default/mywildcard-tls
ingress_nginx_class: "nginx"
Grafana Helm values.yaml:
ingress:
  enabled: true
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  ingressClassName: nginx
  # Values can be templated
  annotations:
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  labels: {}
  path: /
  # pathType is only for k8s >= 1.18
  pathType: Prefix
  hosts:
    - grafana.mycluster.org
  tls:
    - secretName: mywildcard-tls
      hosts:
        - grafana.mycluster.org
kubectl describe pod grafana-679bbfd94-p2dd7
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned default/grafana-679bbfd94-p2dd7 to node1
Normal Pulled 25m kubelet Container image "grafana/grafana:8.2.2" already present on machine
Normal Created 25m kubelet Created container grafana
Normal Started 25m kubelet Started container grafana
Warning Unhealthy 24m (x3 over 25m) kubelet Readiness probe failed: Get "http://10.233.90.33:3000/api/health": dial tcp 10.233.90.33:3000: connect: connection refused
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana LoadBalancer 10.233.14.90 10.10.30.52 80:30285/TCP 55m
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 9d
kubectl get ing (no address assigned):
NAME CLASS HOSTS ADDRESS PORTS AGE
grafana nginx grafana.mycluster.org 80, 443 25m
kubectl describe ing grafana (no address assigned):
Name: grafana
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
mywildcard-tls terminates grafana.mycluster.org
Rules:
Host Path Backends
---- ---- --------
grafana.mycluster.org
/ grafana:80 (10.233.90.33:3000)
Annotations: meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: default
Events: <none>
kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/grafana-b988b9b6-pxccw 1/1 Running 0 2m53s
default pod/nfs-client-nfs-subdir-external-provisioner-68f44cd9f4-wjlpv 1/1 Running 0 17h
ingress-nginx pod/ingress-nginx-controller-6m2vt 1/1 Running 0 17h
ingress-nginx pod/ingress-nginx-controller-xkgxl 1/1 Running 0 17h
kube-system pod/calico-kube-controllers-684bcfdc59-kmsst 1/1 Running 0 17h
kube-system pod/calico-node-dhlnt 1/1 Running 0 17h
kube-system pod/calico-node-r8ktz 1/1 Running 0 17h
kube-system pod/coredns-8474476ff8-9sbwh 1/1 Running 0 17h
kube-system pod/coredns-8474476ff8-fdgcb 1/1 Running 0 17h
kube-system pod/dns-autoscaler-5ffdc7f89d-vskvq 1/1 Running 0 17h
kube-system pod/kube-apiserver-node1 1/1 Running 0 17h
kube-system pod/kube-controller-manager-node1 1/1 Running 1 17h
kube-system pod/kube-proxy-hbjz6 1/1 Running 0 16h
kube-system pod/kube-proxy-lfqzt 1/1 Running 0 16h
kube-system pod/kube-scheduler-node1 1/1 Running 1 17h
kube-system pod/kubernetes-dashboard-548847967d-qqngw 1/1 Running 0 17h
kube-system pod/kubernetes-metrics-scraper-6d49f96c97-2h7hc 1/1 Running 0 17h
kube-system pod/nginx-proxy-node2 1/1 Running 0 17h
kube-system pod/nodelocaldns-64cqs 1/1 Running 0 17h
kube-system pod/nodelocaldns-t5vv6 1/1 Running 0 17h
kube-system pod/registry-proxy-kljvw 1/1 Running 0 17h
kube-system pod/registry-proxy-nz4qk 1/1 Running 0 17h
kube-system pod/registry-xzh9d 1/1 Running 0 17h
metallb-system pod/controller-77c44876d-c92lb 1/1 Running 0 17h
metallb-system pod/speaker-fkjqp 1/1 Running 0 17h
metallb-system pod/speaker-pqjgt 1/1 Running 0 17h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/grafana LoadBalancer 10.233.1.104 10.10.30.52 80:31116/TCP 2m53s
default service/kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 17h
kube-system service/coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 17h
kube-system service/dashboard-metrics-scraper ClusterIP 10.233.35.124 <none> 8000/TCP 17h
kube-system service/kubernetes-dashboard ClusterIP 10.233.32.133 <none> 443/TCP 17h
kube-system service/registry ClusterIP 10.233.30.221 <none> 5000/TCP 17h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ingress-nginx daemonset.apps/ingress-nginx-controller 2 2 2 2 2 kubernetes.io/os=linux 17h
kube-system daemonset.apps/calico-node 2 2 2 2 2 kubernetes.io/os=linux 17h
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 17h
kube-system daemonset.apps/nodelocaldns 2 2 2 2 2 kubernetes.io/os=linux 17h
kube-system daemonset.apps/registry-proxy 2 2 2 2 2 <none> 17h
metallb-system daemonset.apps/speaker 2 2 2 2 2 kubernetes.io/os=linux 17h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/grafana 1/1 1 1 2m53s
default deployment.apps/nfs-client-nfs-subdir-external-provisioner 1/1 1 1 17h
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 17h
kube-system deployment.apps/coredns 2/2 2 2 17h
kube-system deployment.apps/dns-autoscaler 1/1 1 1 17h
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 17h
kube-system deployment.apps/kubernetes-metrics-scraper 1/1 1 1 17h
metallb-system deployment.apps/controller 1/1 1 1 17h
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/grafana-b988b9b6 1 1 1 2m53s
default replicaset.apps/nfs-client-nfs-subdir-external-provisioner-68f44cd9f4 1 1 1 17h
kube-system replicaset.apps/calico-kube-controllers-684bcfdc59 1 1 1 17h
kube-system replicaset.apps/coredns-8474476ff8 2 2 2 17h
kube-system replicaset.apps/dns-autoscaler-5ffdc7f89d 1 1 1 17h
kube-system replicaset.apps/kubernetes-dashboard-548847967d 1 1 1 17h
kube-system replicaset.apps/kubernetes-metrics-scraper-6d49f96c97 1 1 1 17h
kube-system replicaset.apps/registry 1 1 1 17h
metallb-system replicaset.apps/controller-77c44876d 1 1 1 17h
kubectl get ing grafana -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    meta.helm.sh/release-name: grafana
    meta.helm.sh/release-namespace: default
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  creationTimestamp: "2021-11-11T07:16:12Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: grafana
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 8.2.2
    helm.sh/chart: grafana-6.17.5
  name: grafana
  namespace: default
  resourceVersion: "3137"
  uid: 6c34d3bd-9ab6-42fe-ac1b-7620a9566f62
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.mycluster.org
    http:
      paths:
      - backend:
          service:
            name: ssl-redirect
            port:
              name: use-annotation
        path: /*
        pathType: Prefix
      - backend:
          service:
            name: grafana
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}
Running kubectl get ingressclass returned "No resources found". That's the main reason for your issue.
Why? When you specify ingressClassName: nginx in your Grafana values.yaml file, you are telling your Ingress resource to use the nginx IngressClass, which does not exist.
I replicated your issue using minikube, MetalLB, and NGINX Ingress installed via a modified deploy.yaml file with the IngressClass resource commented out and the NGINX Ingress controller name set to nginx, as in your example. The result was exactly the same: ingressClassName: nginx didn't work (no address), but the annotation kubernetes.io/ingress.class: nginx worked.
(For the solution below I'm using the controller pod name ingress-nginx-controller-86c865f5c4-qwl2b, but in your case it will be different - check it using the kubectl get pods -n ingress-nginx command. Also keep in mind this is a kind of workaround - usually the IngressClass resource is installed automatically as part of a full installation of NGINX Ingress. I'm presenting this solution to show why it didn't work for you before, and why it works with NGINX Ingress installed using Helm.)

In the logs of the NGINX Ingress controller (kubectl logs ingress-nginx-controller-86c865f5c4-qwl2b -n ingress-nginx) I found:
"Ignoring ingress because of error while validating ingress class" ingress="default/minimal-ingress" error="no object matching key \"nginx\" in local store"
So it's clearly shown that there is no key matching the nginx controller class - because there is no IngressClass resource, which is the "link" between the NGINX Ingress controller and a running Ingress resource.

You can check which controller class name is bound to the controller by running kubectl get pod ingress-nginx-controller-86c865f5c4-qwl2b -n ingress-nginx -o yaml:
...
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
    - --election-id=ingress-controller-leader
    - --controller-class=k8s.io/nginx
...
Now I will create and apply the following IngressClass resource:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/nginx
Now in the logs I can see that it's properly configured:
I1115 12:13:42.410384 7 main.go:101] "successfully validated configuration, accepting" ingress="minimal-ingress/default"
I1115 12:13:42.420408 7 store.go:371] "Found valid IngressClass" ingress="default/minimal-ingress" ingressclass="nginx"
I1115 12:13:42.421487 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"minimal-ingress", UID:"c708a672-a8dd-45d3-a2ec-f2e2881623ea", APIVersion:"networking.k8s.io/v1", ResourceVersion:"454362", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
After I re-applied the Ingress resource definition, the Ingress resource got an IP address.
As I said before, instead of using this workaround, I'd suggest installing NGINX Ingress using a method that installs the IngressClass automatically as well. Since you have chosen the Helm chart, which includes the IngressClass resource, the problem is gone. Other possible ways to install are described here.
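For completeness, a sketch of a Helm-based install that creates the controller together with a matching IngressClass (the ingress-nginx chart exposes the class under controller.ingressClassResource; the values shown are my assumption of sensible settings, not something from your cluster):

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# Installs the controller and an IngressClass named "nginx" in one step
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.ingressClassResource.name=nginx
```

After this, kubectl get ingressclass should list the nginx class, and an Ingress with ingressClassName: nginx will be picked up by the controller.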