
Deploying to multiple EC2 servers with Fabric

I'm wondering if anyone has experience deploying to multiple servers behind a load balancer on EC2 with Fabric.

I have used Fabric for a while now and have no issues with it, or with deploying to multiple servers. What I would like to do in this scenario (let's say I have ten instances running) is: de-register half (5) of the boxes from my load balancer, deploy my code to them and run a smoke test, and if everything looks good, register them with the load balancer again, then de-register the remaining 5 instances, deploy to them, and register them back to the load balancer.

I have no problem accomplishing any of the individual tasks (de-registering, running tests, deploying, etc.); I just don't know how to organize my hosts in a simple fashion so that I can deploy to the first half, then the second half. Fabric seems to be set up to run the same tasks on all hosts in order (task 1 on host 1, task 1 on host 2, task 2 on host 1, task 2 on host 2, and so on).

My first thought was to create a task to handle the first part (de-registering, deploying and testing) and then set env.hosts to the second half of the servers, but that felt a bit hokey.

Has anyone modeled something similar to this with Fabric before?

You can simplify this by defining roles (used to aggregate hosts), executing your tasks on one role, then running tests and deploying on the second role.

Example roledefs:

from fabric.api import env, execute

env.roledefs = {
    'first_half': ['host1', 'host2'],
    'second_half': ['host3', 'host4'],
}

def deploy_server():
    # deploy a single host from the current role here
    ...

def deploy():
    # first role:
    execute(deploy_server, roles=['first_half'])
    test()  # smoke-test the servers that were just deployed
    # second role:
    execute(deploy_server, roles=['second_half'])
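With this layout a single top-level invocation drives both halves (assuming the fabfile exposes deploy as a task and that test() is your own smoke-test helper):

fab deploy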


You want to use the execute() function. This will allow you to do something like this:

from fabric.api import execute

def update():
    deallocate()   # pull this host out of the load balancer
    push_code()
    smoke_test()   # could fail fast
    reallocate()   # put it back behind the load balancer

def deploy():
    # first_five / last_five are lists of host strings
    execute(update, hosts=first_five)
    execute(update, hosts=last_five)
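The answer leaves deallocate() and reallocate() up to you; below is a minimal sketch of what they might look like with boto's ELB API (the load balancer name, the region, and the instance_id_for() helper that maps env.host to an EC2 instance id are all placeholders, not part of the answer):

from boto.ec2 import elb
from fabric.api import env

ELB_NAME = 'my-load-balancer'   # placeholder load balancer name

def _load_balancer():
    conn = elb.connect_to_region('us-east-1')   # example region
    return conn.get_all_load_balancers(load_balancer_names=[ELB_NAME])[0]

def deallocate():
    # instance_id_for() is a hypothetical helper mapping a hostname to its instance id
    _load_balancer().deregister_instances([instance_id_for(env.host)])

def reallocate():
    _load_balancer().register_instances([instance_id_for(env.host)])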

You could also make each of the deallocate, push_code and smoke_test steps its own execute() call inside deploy(); then you'd run all the deallocates, then all the code pushes, and so on.

Then you can add a check of some sort before proceeding to run the same tasks on the remaining hosts.
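A minimal sketch of that alternative layout, reusing the first_five/last_five placeholders from above:

from fabric.api import execute

def deploy():
    for batch in (first_five, last_five):
        execute(deallocate, hosts=batch)
        execute(push_code, hosts=batch)
        execute(smoke_test, hosts=batch)   # could fail fast here
        execute(reallocate, hosts=batch)
        # ...insert whatever check you need before moving on to the next batch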

I've successfully combined Fabric with boto. I populate the hosts list using boto, and you can use the @parallel decorator to limit the number of hosts executed in one go. The command looks as follows:

fab running deploy

The code looks like so:

from itertools import chain

from boto import ec2
from fabric.api import env, task, runs_once, parallel

region = 'us-east-1'  # example region

@task
@runs_once
def running():
    # collect the public DNS names of all running instances into env.hosts
    ec2conn = ec2.connect_to_region(region)
    reservations = ec2conn.get_all_instances(filters={'instance-state-name': 'running'})
    instances = list(chain.from_iterable(r.instances for r in reservations))
    env.hosts = [i.public_dns_name for i in instances]

@task
@parallel(pool_size=5)
def deploy():
    # do stuff on n<=5 hosts in parallel
    pass

If you need to handle a subset of the hosts, I'd suggest using tags.
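For instance, a tag filter can be added to the same get_all_instances() call (the tag key and value here are just hypothetical examples):

reservations = ec2conn.get_all_instances(
    filters={'instance-state-name': 'running', 'tag:deploy-group': 'first-half'})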

Fabric is not restricted to running the same tasks on all hosts.

Apart from the fact that you can explicitly set the hosts for a specific task with the -H command line parameter, you can use this pattern and this newer pattern to do exactly what you want.

Update: here is how you can use roles:
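A minimal sketch of the decorator-based approach (host names and the task name are placeholders):

from fabric.api import env, roles, run

env.roledefs = {
    'first_half': ['host1', 'host2'],
    'second_half': ['host3', 'host4'],
}

@roles('first_half')
def deploy_first_half():
    # your deploy steps here, e.g. pulling code and restarting services
    run('uname -a')   # placeholder command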

Or you could simply write a function which sets some variables, for example:

from fabric.api import env

def live():
    global PATH, ENV_PATH
    env.hosts = ["22.2.222.2"]
    env.user = 'test'
    PATH = '/path/to/project'
    # optional, if using virtualenv
    ENV_PATH = '/path/to/virtualenv'
    # overwrite whatever variables you need to change for the target machine

and then prefix your deploy command with it:

fab live deploy
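A matching deploy task might look something like this (a sketch only; the git pull step is just an example of a deploy command):

from fabric.api import cd, run

def deploy():
    with cd(PATH):
        run('git pull')   # example deploy step
        # activate the virtualenv at ENV_PATH here if you use one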

Details: http://simionbaws.ro/programming/deploy-with-fabric-on-multiple-servers/

Rather than meddle with env.hosts, you could pass a list (or any iterable) to the hosts decorator. Something like:

from fabric.api import hosts

def deploy(half_my_hosts):
    @hosts(half_my_hosts)
    def mytask():
        # ...
        pass
    mytask()

Then you could split your env.hosts any way you like and pass each part to deploy().
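For example, splitting the current host list in half might look like this (a sketch; rolling_deploy is just an illustrative name):

from fabric.api import env

def rolling_deploy():
    half = len(env.hosts) // 2
    deploy(env.hosts[:half])   # first half
    deploy(env.hosts[half:])   # second half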
