
Kubernetes scaling pods using custom algorithm

Our cloud application consists of three tightly coupled Docker containers: Nginx, Web, and Mongo. Currently we run these containers on a single machine. However, as our user base grows, we are looking for a solution to scale. Using Kubernetes we would form a multi-container pod. If we are to replicate, we need to replicate all three containers as a unit. Our cloud application is consumed by mobile app users. Our app can only handle approximately 30,000 users per worker node, and we intend to place a single pod on a single worker node. Once a mobile device is connected to a worker node, it must continue to use only that machine (unique IP address).

We plan on using Kubernetes to manage the containers. Load balancing doesn't work for our use case, because a mobile device needs to stay tied to a single machine once assigned, and each pod works independently with its own persistent volume. However, we need a way of spinning up new pods on worker nodes as the number of users goes over 30,000, then 60,000, and so on.

The idea is that we have some sort of custom scheduler which assigns a mobile device a worker node (domain/IP address) depending on the number of users on that node.
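
For illustration, a minimal sketch of what such an assignment policy might look like (the node addresses, the capacity constant, and the source of the user counts are all hypothetical):

    # Hypothetical assignment logic: pick the least-loaded worker node
    # that is still below the per-node capacity.
    MAX_USERS_PER_NODE = 30000

    def assign_node(node_user_counts):
        """node_user_counts maps a worker node's address to its current user count."""
        for address, users in sorted(node_user_counts.items(), key=lambda kv: kv[1]):
            if users < MAX_USERS_PER_NODE:
                return address
        return None  # every node is full: a new pod/worker node must be spun up

    # Example: the least-loaded node with spare capacity wins.
    print(assign_node({"10.0.0.1": 30000, "10.0.0.2": 12000}))  # -> 10.0.0.2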

Is Kubernetes a good fit for this design, and how could we implement a custom pod-scaling algorithm?

Thanks

Piggy-backing on the answer of Jonah Benton:

While this is technically possible, your problem is not with Kubernetes, it's with your application! Let me point out the problem:

Our cloud application consists of three tightly coupled Docker containers: Nginx, Web, and Mongo.

Here is your first problem: if you can only deploy these three containers together and not independently, you cannot scale one or the other! While MongoDB can be scaled to handle insane loads, if it's bundled with your web server and web application, it won't be able to.

So the first step for you is to break up these three components so they can be managed independently of each other. Next:

Currently we run these containers on a single machine.

While not strictly a problem, I have serious doubts about what it would mean to scale your application and what challenges come with scalability!

Once a mobile device is connected to a worker node, it must continue to use only that machine (unique IP address).

Now, this IS a problem. You're looking to run an application on Kubernetes, but I do not think you understand the consequences of doing that: Kubernetes orchestrates your resources. This means it will move pods between nodes (by killing and recreating them, sometimes even onto the same node). It does this fully autonomously, which is awesome and lets you sleep at night. But if you're relying on clients sticking to a single node's IP, you're going to be woken up in the middle of the night because Kubernetes corrected for a node failure and moved your pod, which is now gone from that address, and your users can't connect anymore. You need to leverage the load-balancing features (Services) in Kubernetes. Only they are able to handle the dynamic changes that happen in Kubernetes clusters.
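
As a hedged sketch (all names are placeholders), this is what creating such a Service looks like with the official Kubernetes Python client. Clients talk to the Service's stable virtual IP/DNS name, and Kubernetes routes traffic to whichever matching pods are currently alive:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # A Service decouples clients from individual pod/node IPs: it keeps a
    # stable address and forwards to the pods matching its selector.
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1ServiceSpec(
            selector={"app": "web"},  # matches the labels on the web pods
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    v1.create_namespaced_service(namespace="default", body=service)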

Using Kubernetes we would form a multi-container pod.

And we have another winner: no! You're trying to treat Kubernetes as if it were your on-premises infrastructure! If you keep doing so, you're going to fail and curse Kubernetes in the process!

Now that I've told you some of the things you're getting wrong, what kind of person would I be if I did not offer some advice on how to make this work:

In Kubernetes your three applications should not run in one pod! They should run in separate pods:
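
As a rough sketch of that split, again with the official Python client (image names and replica counts are placeholders; in practice Mongo would be better served by a StatefulSet than a Deployment):

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    def make_deployment(name, image, replicas):
        """One Deployment per tier, so each tier scales independently."""
        labels = {"app": name}
        return client.V1Deployment(
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1DeploymentSpec(
                replicas=replicas,
                selector=client.V1LabelSelector(match_labels=labels),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels=labels),
                    spec=client.V1PodSpec(
                        containers=[client.V1Container(name=name, image=image)]
                    ),
                ),
            ),
        )

    # nginx, web, and mongo each get their own Deployment and replica count.
    for name, image, replicas in [
        ("nginx", "nginx:1.25", 2),
        ("web", "myorg/web:latest", 3),  # placeholder image
        ("mongo", "mongo:6", 1),
    ]:
        apps.create_namespaced_deployment(
            namespace="default", body=make_deployment(name, image, replicas)
        )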

Feel free to ask if you have any more questions!

Building a custom scheduler and running multiple schedulers at the same time is supported:

https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
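
To give a feel for what that involves, here is a heavily simplified, hedged sketch of a custom scheduler using the official Python client: it watches for pending pods that request this scheduler by name (via schedulerName in the pod spec) and binds each one to a node chosen by your own policy. The scheduler name and the node-picking logic are assumptions:

    from kubernetes import client, config, watch

    SCHEDULER_NAME = "user-count-scheduler"  # hypothetical name

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def pick_node():
        """Placeholder policy; real logic would consult per-node user counts."""
        return v1.list_node().items[0].metadata.name

    def bind(pod, node_name):
        # Creating a Binding is what "scheduling" means to the API server.
        body = client.V1Binding(
            metadata=client.V1ObjectMeta(name=pod.metadata.name),
            target=client.V1ObjectReference(
                api_version="v1", kind="Node", name=node_name
            ),
        )
        # _preload_content=False sidesteps a known response-parsing quirk
        # of this call in the Python client.
        v1.create_namespaced_binding(
            namespace=pod.metadata.namespace, body=body, _preload_content=False
        )

    w = watch.Watch()
    for event in w.stream(v1.list_pod_for_all_namespaces):
        pod = event["object"]
        if (pod.status.phase == "Pending"
                and pod.spec.scheduler_name == SCHEDULER_NAME
                and not pod.spec.node_name):
            bind(pod, pick_node())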

That said, to the question of whether Kubernetes is a good fit for this design, my answer is: not really.

K8s can be difficult to operate, with the payoff being the level of automation and resiliency that it provides out of the box for whole classes of workloads.

This workload is not one of those. To gain any benefit, you would have to write a scheduler that handles the edge-failure and error cases this application has (what happens when you lose a node for a short period of time...) in a way that makes sense for k8s. And you would have to come up to speed with normal k8s operations.

With the information provided, I'm hard pressed to see why one would use k8s for this workload instead of just running Docker on some VMs and scripting some of the automation.
