
How to expose kubernetes service from bare metal cluster

I run a kubernetes cluster on a 'bare metal' Ubuntu machine, as described here: http://kubernetes.io/docs/getting-started-guides/ubuntu/. After I create a LoadBalancer service, I can see which IP address it runs on:

kubectl describe services sonar
Name:           sonar
IP:             10.0.0.170
Port:           <unset> 9000/TCP
Endpoints:      172.17.0.2:9000
. . .  

Then I expose this to the world with nginx running outside of the kubernetes cluster. Great, but on the next service deployment the IP changes. How can I deal with this? Fix the IP, use environment variables, or some other way?

Without having seen your service definition, it sounds to me like you want a NodePort type of service rather than a LoadBalancer. With a NodePort service you would simply point NGINX at the IP address of the Ubuntu machine and the port specified in the service definition. As long as the address of the Ubuntu machine is stable, you should be fine.
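A NodePort version of the sonar service might look like the following sketch (the selector labels and the fixed nodePort value are assumptions, since the original service definition was not posted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sonar
spec:
  type: NodePort          # expose on a fixed port on every node
  selector:
    app: sonar            # assumed pod label
  ports:
    - port: 9000          # cluster-internal service port
      targetPort: 9000    # container port
      nodePort: 30900     # stable external port; must be in 30000-32767
```

NGINX would then proxy to `<node-ip>:30900`, which stays the same across redeployments even though the service's cluster IP changes.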

If you run Kubernetes on multiple machines, you simply add the IP addresses of all machines to your NGINX configuration and let it do the load balancing.

More information about the different service types is available here: http://kubernetes.io/docs/user-guide/services/#publishing-services---service-types

Disclaimer: I work for Stackpoint; after studying the different choices we decided to use ingress controllers for our product, so my answer is biased toward ingresses.

With ingress + an ingress controller you can balance external load across the pods' endpoints. While services are resources whose main purpose is to track pods and create routes (among other things), ingress is a much better way of defining balancing rules. As of now it:

  • Supports host names
  • Supports TLS specification using secrets
  • Can route based on paths
  • Can define default backends
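
For illustration, the four capabilities above could be combined in a single Ingress resource like this sketch (shown in the current networking.k8s.io/v1 syntax; the host name, secret name, and backend service names are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonar-ingress
spec:
  tls:                               # TLS via a secret
    - hosts:
        - sonar.example.com
      secretName: sonar-tls
  defaultBackend:                    # default backend for unmatched requests
    service:
      name: default-http
      port:
        number: 80
  rules:
    - host: sonar.example.com        # host-name routing
      http:
        paths:
          - path: /sonar             # path-based routing
            pathType: Prefix
            backend:
              service:
                name: sonar
                port:
                  number: 9000
```

The ingress controller watches resources like this one and translates them into configuration for the underlying balancer.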

The big disadvantage of ingress is that you need an ingress controller that listens for Ingress resources, resolves endpoints, communicates config changes to the balancer, and reloads it if necessary. Since we are in control of what the Ingress tells the balancer, we can configure keepalives, sticky sessions, health checks, etc.

Using services, you are not in full control of all those parameters.

There is an nginx example at kubernetes/contrib that should match most scenarios. At Stackpoint we are using our own haproxy Ingress controller and are quite happy with the results (and will shortly have Ingress management from our UI).

The Ingress page in the Kubernetes docs contains more info and, at the bottom, a section with links to the alternatives.
