

Scaling microservices using Docker

I've created a Node.js (Meteor) application and I'm looking at strategies to handle scaling in the future. I've designed my application as a set of microservices, and I'm now considering implementing this in production.

What I'd like to do, however, is run many microservices on one server instance to maximise resource usage while each of them only consumes a small amount of resources. I know containers are useful for this, but I'm curious whether there's a way to create a dynamically scaling set of containers where I can:

  • Write commands such as "provision another app container on this server if the containers running this app reach > 80% CPU/other limiting metrics",
  • Provision and prepare other servers if needed for extra containers,
  • Load balance connections between these containers (and does this affect server-level load balancing, e.g. sending fewer connections to servers with fewer containers?)

I've looked into AWS EC2, Docker Compose and nginx, but I'm uncertain if I'm going in the right direction.

Investigate Kubernetes and/or Mesos, and you'll never look back. They're tailor-made for what you're looking to do. The two components you should focus on are:

  1. Service Discovery: This allows inter-dependent services (micro-service "A" calls "B") to "find" each other. It's typically done using DNS, but with registration features on top of it that handle what happens as instances are scaled (a minimal sketch of a DNS-based lookup follows this list).

  2. Scheduling: In Docker-land, scheduling isn't about cron jobs; it means how containers are scaled and "packed" onto servers in various ways to maximize efficient usage of the available resources.
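
For a concrete feel of item 1, here's a minimal sketch of what DNS-based discovery looks like from a client's point of view inside a Kubernetes cluster. The service name `orders` and namespace `default` are made up, and the lookup only resolves from within the cluster:

```python
import socket

# Inside a Kubernetes cluster, a Service named "orders" in the "default"
# namespace (hypothetical names) is resolvable via cluster DNS. The caller
# never needs to know which pods/containers currently back the service.
addrinfo = socket.getaddrinfo("orders.default.svc.cluster.local", 80,
                              proto=socket.IPPROTO_TCP)

for family, _type, _proto, _canonname, sockaddr in addrinfo:
    print("orders service is reachable at", sockaddr)
```

The point is that scaling instances up or down changes what sits behind that name, not the name itself, so callers stay unchanged.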

There are actually dozens of options here: Docker Swarm, Rancher, etc. are also competing alternatives. Many cloud vendors like Amazon also offer dedicated services (such as ECS) with these features. But Kubernetes and Mesos are emerging as standard choices, so you'd be in good company if you at least start there.

Metrics can be collected via the Docker API (and a cool blog post), and it's often used for exactly that. Tinkering with the Docker API and the Docker stack tools (Compose/Swarm/Machine) can give you a lot of what you need to scale a microservice architecture efficiently, as sketched below.
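
As a minimal sketch of that idea (assuming the `docker` Python SDK, a hypothetical `my-api:latest` image and an `app=my-api` label; a real scheduler such as Kubernetes or Swarm would normally do this for you), reading per-container CPU from the stats endpoint and reacting to the "> 80% CPU" rule from the question could look roughly like this:

```python
import docker  # pip install docker

CPU_THRESHOLD = 80.0            # limit from the question
APP_LABEL = {"app": "my-api"}   # hypothetical label identifying one microservice
IMAGE = "my-api:latest"         # hypothetical image name

client = docker.from_env()

def cpu_percent(stats):
    """Approximate CPU % from one stats sample (same math as `docker stats`)."""
    cpu, pre = stats["cpu_stats"], stats["precpu_stats"]
    cpu_delta = cpu["cpu_usage"]["total_usage"] - pre.get("cpu_usage", {}).get("total_usage", 0)
    sys_delta = cpu.get("system_cpu_usage", 0) - pre.get("system_cpu_usage", 0)
    if sys_delta <= 0:
        return 0.0
    return (cpu_delta / sys_delta) * cpu.get("online_cpus", 1) * 100.0

# Look at every running replica of this service on the local daemon.
containers = client.containers.list(filters={"label": "app=my-api"})
loads = [cpu_percent(c.stats(stream=False)) for c in containers]

# If every replica is hot, start one more container for this service.
if containers and min(loads) > CPU_THRESHOLD:
    client.containers.run(IMAGE, detach=True, labels=APP_LABEL)
```

This only handles a single host and a single metric; an orchestrator adds the multi-server packing and load balancing on top.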

I'd also advise using Consul to manage service discovery in such a resource-aware system.
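
As a rough sketch (the service name, port and health-check URL are placeholders), an instance can register itself with its local Consul agent over the agent HTTP API, and other services can then look up healthy instances via Consul's HTTP or DNS interfaces:

```python
import requests  # talking to a local Consul agent's HTTP API

# Hypothetical service instance registering itself with the local Consul agent.
registration = {
    "Name": "orders-api",         # placeholder service name
    "ID": "orders-api-1",         # unique per instance
    "Port": 4000,
    "Check": {                    # Consul marks instances unhealthy if this fails
        "HTTP": "http://localhost:4000/health",
        "Interval": "10s",
    },
}

resp = requests.put(
    "http://localhost:8500/v1/agent/service/register",
    json=registration,
)
resp.raise_for_status()

# Consumers can then query healthy instances, e.g.
#   GET http://localhost:8500/v1/health/service/orders-api?passing=true
# or resolve them through Consul DNS as orders-api.service.consul.
```

Because unhealthy instances drop out of the catalog automatically, callers only ever see replicas that are actually able to serve traffic.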

We are using AWS to host our microservices application, and we use ECS (the AWS Docker service) to containerize the different APIs.

In this context, we use the AWS auto scaling feature to manage scale-in and scale-out (see the sketch below). Check this.
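
Roughly, that means registering the ECS service's desired count as a scalable target and attaching a target-tracking policy to it. A hedged boto3 sketch, with placeholder cluster/service names and thresholds:

```python
import boto3  # AWS SDK for Python

autoscaling = boto3.client("application-autoscaling")

# Placeholder cluster/service names; the scalable resource is the ECS
# service's DesiredCount (how many tasks/containers it runs).
resource_id = "service/my-cluster/orders-api"

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Target tracking: ECS adds/removes tasks to keep average CPU near 70%.
autoscaling.put_scaling_policy(
    PolicyName="orders-api-cpu",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```

With target tracking, ECS adds tasks when average CPU rises above the target and removes them when it falls, which covers the scale-in/scale-out part without hand-written rules.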

Hope it helps.
