
Routing traffic to a Kubernetes cluster

I have a question related to Kubernetes networking.

I have a microservice (say numcruncherpod) running in a pod that serves requests on port 9000, and I have created a corresponding Service of type NodePort (numcrunchersvc); the node port on which this service is exposed is 30900.

My cluster has 3 nodes with the following IPs:

  1. 192.168.201.70
  2. 192.168.201.71
  3. 192.168.201.72

I will be routing traffic to my cluster via a reverse proxy (nginx). As I understand it, I need to specify the IPs of all these cluster nodes in the nginx configuration to route traffic to the cluster. Is my understanding correct?

My worry is that since nginx has no knowledge of the cluster, it may not be a good judge of which cluster node the traffic should be sent to. So is there a better way to route traffic to my Kubernetes cluster?

PS: I am not running the cluster on any cloud platform.

This answer is a little late, and a little long, so I ask for forgiveness before I begin. :)

For people not running Kubernetes clusters on cloud providers, there are 4 distinct options for exposing services running inside the cluster to the outside world.

  1. Service of type: NodePort. This is the simplest and the default. Kubernetes assigns a random port to your service. Every node in the cluster listens for traffic to this particular port and then forwards it to any one of the pods backing that service. This is usually handled by kube-proxy, which leverages iptables and load-balances using a round-robin strategy. Typically, since the UX of this setup is not pretty, people often add an external "proxy" server, such as HAProxy, Nginx or httpd, to listen for traffic on a single IP and forward it to one of these backends. This is the setup you, OP, described.
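As a sketch, the NodePort setup the question describes might look like this (the names and ports come from the question; the selector label is an assumption and must match the labels on the pods backing the service):

```yaml
# NodePort Service for the numcruncher microservice.
apiVersion: v1
kind: Service
metadata:
  name: numcrunchersvc
spec:
  type: NodePort
  selector:
    app: numcruncher    # assumed pod label
  ports:
    - port: 9000        # port the Service exposes inside the cluster
      targetPort: 9000  # port the pod's container listens on
      nodePort: 30900   # port opened on every node (30000-32767 range)
```

With this in place, any node IP plus the node port reaches the service, e.g. 192.168.201.70:30900.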

  2. A step up from this would be using a Service of type: ExternalIP. This is identical to the NodePort service, except it also gets Kubernetes to add an additional rule on all Kubernetes nodes that says "all traffic that arrives for this destination IP must also be forwarded to the pods". This basically allows you to specify any arbitrary IP as the "external IP" for the service. As long as traffic destined for that IP reaches one of the nodes in the cluster, it will be routed to the correct pod. Getting that traffic to any of the nodes, however, is your responsibility as the cluster administrator. The advantage here is that you no longer have to run an haproxy/nginx setup if you specify the IP of one of the physical interfaces of one of your nodes (for example, one of your master nodes). Additionally, you cut down the number of hops by one.
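A minimal sketch of that variant, reusing one of the node IPs from the question as the external IP (the selector label is again an assumption):

```yaml
# Service with an externalIP pinned to one node's physical interface.
apiVersion: v1
kind: Service
metadata:
  name: numcrunchersvc
spec:
  selector:
    app: numcruncher   # assumed pod label
  ports:
    - port: 9000
      targetPort: 9000
  externalIPs:
    - 192.168.201.70   # any node receiving traffic for this IP forwards it to the pods
```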

  3. Service of type: LoadBalancer. This service type brings baremetal clusters to parity with cloud providers. A fully functioning load-balancer provider is able to select an IP from a pre-defined pool, automatically assign it to your service, and advertise it to the network, assuming it is configured correctly. This is the most "seamless" experience you'll have when it comes to Kubernetes networking on baremetal. Most LoadBalancer provider implementations use BGP to talk and advertise to an upstream L3 router. MetalLB and kube-router are two FOSS projects that fit this niche.
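As an illustration, with MetalLB installed this might look like the following (a sketch using MetalLB's CRD-based configuration from v0.13+ in layer-2 mode; the address range is an assumption and must be free on the node subnet):

```yaml
# MetalLB address pool and L2 advertisement (metallb.io/v1beta1 CRDs).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.201.200-192.168.201.210  # assumed unused range on the node subnet
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
---
# The service just asks for type LoadBalancer; MetalLB assigns an IP from the pool.
apiVersion: v1
kind: Service
metadata:
  name: numcrunchersvc
spec:
  type: LoadBalancer
  selector:
    app: numcruncher   # assumed pod label
  ports:
    - port: 9000
      targetPort: 9000
```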

  4. Kubernetes Ingress. If your requirement is limited to L7 applications, such as REST APIs, HTTP microservices, etc., you can set up a single Ingress provider (nginx is one such provider) and then configure Ingress resources for all your microservices, instead of Service resources. You deploy your Ingress provider and make sure it has an externally available and routable IP (you can pin it to a master node and use the physical interface IP of that node, for example). The advantage of using Ingress over Services is that Ingress objects understand HTTP microservices natively, and you can do smarter health checking, routing and management.
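An Ingress resource for the service from the question might be sketched like this (the hostname is a placeholder, and ingressClassName: nginx assumes the nginx ingress controller is installed):

```yaml
# Ingress routing HTTP traffic for a hostname to numcrunchersvc.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: numcruncher-ingress
spec:
  ingressClassName: nginx            # assumes the nginx ingress controller
  rules:
    - host: numcruncher.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: numcrunchersvc
                port:
                  number: 9000
```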

Often people combine one of (1), (2) or (3) with (4), since the first 3 are L4 (TCP/UDP) and (4) is L7. So things like URL path/domain-based routing, SSL termination, etc. are handled by the Ingress provider, while IP lifecycle management and routing are taken care of by the service layer.

For your use case, the ideal setup would involve:

  1. A Deployment for your microservice, with health endpoints on your pod.
  2. An Ingress provider, so that you can tweak/customize your routing and load-balancing, as well as use it for SSL termination, domain matching, etc.
  3. (Optional) A LoadBalancer provider in front of your Ingress provider, so that you don't have to manually configure your Ingress's networking.
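Step (1) above, a Deployment with health endpoints, might be sketched as follows (the image name and the /healthz probe path are assumptions about your application):

```yaml
# Deployment for the numcruncher microservice with readiness/liveness probes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: numcruncher
spec:
  replicas: 3
  selector:
    matchLabels:
      app: numcruncher
  template:
    metadata:
      labels:
        app: numcruncher
    spec:
      containers:
        - name: numcruncher
          image: registry.example.com/numcruncher:latest  # placeholder image
          ports:
            - containerPort: 9000
          readinessProbe:       # gates traffic until the pod reports ready
            httpGet:
              path: /healthz    # assumed health endpoint
              port: 9000
          livenessProbe:        # restarts the container if it stops responding
            httpGet:
              path: /healthz
              port: 9000
```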

Correct. You can route traffic to any or all of the K8 minions. The K8 network layer will forward to the appropriate minion if necessary.

If you are running only a single pod, for example, nginx will most likely round-robin the requests. When a request hits a minion which does not have the pod running on it, the request will be forwarded to the minion that does have the pod running.

If you run 3 pods, one on each minion, the request will be handled by whatever minion gets the request from nginx.

If you run more than one pod on each minion, the requests will be round-robined to each minion, and then round-robined to each pod on that minion.
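For what it's worth, the node-to-node forwarding described above corresponds to the default externalTrafficPolicy: Cluster on the service; a sketch (labels and ports as in the question, label an assumption):

```yaml
# externalTrafficPolicy controls the forwarding behaviour described above.
apiVersion: v1
kind: Service
metadata:
  name: numcrunchersvc
spec:
  type: NodePort
  externalTrafficPolicy: Cluster  # default: a node without a local pod
                                  # forwards the request to a node that has one
  selector:
    app: numcruncher              # assumed pod label
  ports:
    - port: 9000
      targetPort: 9000
      nodePort: 30900
```

Setting it to Local instead would make nodes accept traffic only for pods running locally, which matters if you want to preserve client source IPs.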
