
K8S baremetal nginx-ingress-controller

OS: RHEL7 | k8s version: 1.12/13 | kubespray | baremetal

I have a standard kubespray bare-metal cluster deployed, and I am trying to understand the simplest recommended way to deploy nginx-ingress-controller so that I can expose simple services. There is no load balancer provided. I want my MASTER public IP to be the endpoint for my services.

Heads up

The GitHub k8s ingress-nginx docs suggest a NodePort service as a "mandatory" step, which does not seem to be enough to make it work along with kubespray's ingress_controller.

I was able to make it work by forcing the LoadBalancer service type and setting the externalIP value to the MASTER public IP on the nginx-ingress-controller service via kubectl edit svc, but this does not seem to be a correct solution given the lack of an actual load balancer.
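Roughly, the hand-edited Service looked like the sketch below (a sketch only: the service name, namespace, and selector depend on how the controller was installed, and the IP is a placeholder for the MASTER public IP):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx        # assumed name/namespace
  namespace: ingress-nginx
spec:
  type: LoadBalancer         # forced; no cloud provider will ever provision it
  externalIPs:
    - 203.0.113.10           # placeholder for the MASTER public IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443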

Similar results using the helm chart:

helm install -n ingress-nginx stable/nginx-ingress --set controller.service.externalIPs[0]="MASTER PUBLIC IP"
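For reference, the same setting can be expressed in a values file instead of a --set flag (a sketch assuming the stable/nginx-ingress chart used above; the IP is a placeholder):

# values.yaml sketch for stable/nginx-ingress
controller:
  service:
    externalIPs:
      - 203.0.113.10   # placeholder for the MASTER public IP

and passed to helm install with -f values.yaml.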

I was able to make it work by forcing the LoadBalancer service type and setting the externalIP value to the MASTER public IP on the nginx-ingress-controller service via kubectl edit svc, but this does not seem to be a correct solution given the lack of an actual load balancer.

Correct, that is not what LoadBalancer is intended for. It's intended for provisioning load balancers with cloud providers like AWS, GCP, or Azure, or with a load balancer that exposes some sort of API so that the kube-controller-manager can interface with it. If you look at your kube-controller-manager logs you should see some errors. The way you made it work is obviously a hack, but I suppose it works.

The standard way to implement this is just to use a NodePort service and have whatever proxy/load balancer (e.g. nginx or haproxy) you run in front of the cluster send traffic to the NodePorts; a sketch of such a Service follows below. Note that I don't recommend using the master to front your services either, since it already runs some of the critical Kubernetes pods like the kube-controller-manager, kube-apiserver, kube-scheduler, etc.
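A minimal sketch of that NodePort Service, assuming an ingress-nginx deployment labelled app.kubernetes.io/name: ingress-nginx (the pinned nodePort values are arbitrary choices within the default 30000-32767 range):

# Sketch: expose the ingress controller on fixed node ports,
# then point an external nginx/haproxy at <any-node-ip>:30080 and :30443
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # external proxy forwards :80 here
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443   # external proxy forwards :443 here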

The only exception is MetalLB, which you can use with a LoadBalancer service type. Keep in mind that as of this writing the project is in its early stages.
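For completeness, a minimal MetalLB layer-2 setup looks roughly like the sketch below (the ConfigMap-based configuration used by early MetalLB releases; the address range is a placeholder and must be routable on your node network):

# Sketch: MetalLB layer-2 address pool (ConfigMap-style config of early releases)
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.240-203.0.113.250   # placeholder range on the node LAN

With such a pool in place, a Service of type LoadBalancer gets an IP assigned from the range instead of staying in Pending.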
