How to make Fabric execution follow the env.hosts list order?

I have the following fabfile.py:

from fabric.api import env, run

host1 = '192.168.200.181'
host2 = '192.168.200.182'
host3 = '192.168.200.183'

env.hosts = [host1, host2, host3]

def df_h():
    run("df -h | grep sda3")

And I get the following output:

[192.168.200.181] run: df -h | grep sda3
[192.168.200.181] out: /dev/sda3             365G  180G  185G  50% /usr/local/nwe
[192.168.200.183] run: df -h | grep sda3
[192.168.200.183] out: /dev/sda3             365G   41G  324G  12% /usr/local/nwe
[192.168.200.182] run: df -h | grep sda3
[192.168.200.182] out: /dev/sda3             365G   87G  279G  24% /usr/local/nwe

Done.
Disconnecting from 192.168.200.182... done.
Disconnecting from 192.168.200.181... done.
Disconnecting from 192.168.200.183... done.

Note that the execution order is different from the env.hosts specification.

Why does it work this way? Is there a way to make the execution order the same as the order specified in the env.hosts list?

The exact reason the order from env.hosts is not preserved is that hosts can be specified at three "levels" (env.hosts, the command line, and per task), and those lists get merged together. In fabric/main.py on line 309, you can see that the merge uses the set() type to remove duplicates across the three possible host lists. Since a set() has no order, the hosts come back as a list in "random" order.
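
The effect is easy to reproduce outside Fabric. The following is only a toy sketch of the merging behaviour described above, not Fabric's actual code, and the variable names are made up:

# Merging host lists with set() throws away the env.hosts ordering.
env_hosts = ['192.168.200.181', '192.168.200.182', '192.168.200.183']
cli_hosts = []    # hosts given on the command line (assumed empty here)
task_hosts = []   # hosts attached to the task itself (assumed empty here)

merged = list(set(env_hosts + cli_hosts + task_hosts))
print(merged)  # arbitrary order, e.g. ['...183', '...181', '...182']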

There's a pretty good reason this is the method used. It's a very efficient way to remove duplicates from a list, and for Fabric the order shouldn't matter: you're asking Fabric to perform a series of completely parallel, atomic actions on various hosts. By the very nature of parallel, atomic actions, order does not affect whether they can be performed successfully. If order did matter, a different strategy would be needed and Fabric would no longer be the right tool for the job.

That said, is there a particular reason that you need these operations to occur in order? Perhaps if you're having some sort of problem that's a result of execution order, we can help you work that out.

Just to update: the newest Fabric 1.1+ (I think even 1.0) now dedupes in an order-preserving way, so this should be a non-issue now.
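
For comparison, an order-preserving dedupe keeps the first occurrence of each host, so the env.hosts order survives. A minimal sketch of that idea (not Fabric's actual implementation):

def dedupe_preserving_order(hosts):
    # Keep only the first occurrence of each host, in the order given.
    seen = set()
    ordered = []
    for host in hosts:
        if host not in seen:
            seen.add(host)
            ordered.append(host)
    return ordered

print(dedupe_preserving_order(['h1', 'h2', 'h1', 'h3']))  # ['h1', 'h2', 'h3']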
