
Load Balancing an application in Kubernetes

Let's say that I have two deployments which contain two instances of a backend application (instead of having one deployment with multiple replicas, as the two instances need to be configured differently).

How would you go about load balancing between the two? The classic approach would be to set up HAProxy with the two backends. Does this sound right in the context of Kubernetes? Is there a better way to expose two deployments on a single Ingress Controller resource?

You can define a Service whose backends are determined by label selectors. Requests to the Service will be spread across the pods of both deployments (the same applies when the Service sits behind an Ingress).

Example:

apiVersion: v1
kind: Service
metadata:
  name: my-deployments   # a name is required; "my-deployments" is assumed here
  labels:
    app: my-deployments
spec:
  ports:
  - port: 80             # add targetPort if the pods listen on a different port
  selector:
    app: my-deployments  # must match a label carried by the pods of both deployments

Ideally you should be running one deployment with multiple replicas. Define a Service object selecting the backend pods; the Service then automatically load balances across those pods in a round-robin fashion. A sketch of such a deployment is shown below.
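As a minimal sketch of that first approach (the name my-deployments and the nginx image are assumptions, not from the original answer), a single Deployment whose pod template carries the label that the Service above selects could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployments          # assumed name
spec:
  replicas: 2                   # multiple replicas instead of two separate deployments
  selector:
    matchLabels:
      app: my-deployments
  template:
    metadata:
      labels:
        app: my-deployments     # matches the Service selector above
    spec:
      containers:
      - name: backend
        image: nginx            # placeholder image; substitute your backend image
        ports:
        - containerPort: 80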

If you want to load balance multiple deployment objects, then define one Service for each deployment, say ServiceA and ServiceB. You would then run HAProxy to load balance the traffic between ServiceA and ServiceB.
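If you would rather expose the two Services on a single Ingress resource, as the question asks, one possible sketch (the host and the Service names service-a and service-b are assumptions) is an Ingress with one path rule per Service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress           # assumed name
spec:
  rules:
  - host: backend.example.com     # assumed host
    http:
      paths:
      - path: /a
        pathType: Prefix
        backend:
          service:
            name: service-a       # Service in front of the first deployment
            port:
              number: 80
      - path: /b
        pathType: Prefix
        backend:
          service:
            name: service-b       # Service in front of the second deployment
            port:
              number: 80

Note that this routes by path rather than splitting traffic evenly between the two Services; weighted splitting on a single Ingress generally depends on controller-specific features, so check what your Ingress controller supports.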

We recommend you opt for the first approach unless you have a valid reason to consider the second.
