
Weird behavior from Python's multiprocessing

The code I am trying is:

from fabric.api import env, settings, put, run
from multiprocessing import Pool

def update_vm(si, vm):
    env.host_string = vm
    with settings(user=VM_USER, key_filename=inputs['ssh_key_path']):
        put(local_file, remote_zip_file)
        run('tar -zxpf %s' % remote_zip_file)
        run('sudo sh %s' % REMOTE_UPDATE_SCRIPT)
        response_msg = run('cat %s' % REMOTE_RESPONSE_FILE)
        if 'success' in response_msg:
            pass  # do stuff
        else:
            pass  # do stuff

def update_vm_wrapper(args):
    return update_vm(*args)

def main():
    try:
        si = get_connection()
        vms = [vm1, vm2, vm3...]
        update_jobs = [(si, vm) for vm in vms]
        pool = Pool(30)
        pool.map(update_vm_wrapper, update_jobs)
        pool.close()
        pool.join()
    except Exception as e:
        print(e)

if __name__ == "__main__":
    main()

Now the problem is that I see it putting the zip file on the same VM (say vm1) three times (I guess that's the length of vms), and executing the other ssh commands three times as well.

Using a lock around the update_vm() method solves the issue, but then it no longer looks like a multiprocessing solution; it is more like iterating over a loop.
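For reference, this is a minimal sketch of the lock-based workaround described above, assuming a module-level multiprocessing.Lock that the forked pool workers inherit (POSIX fork start method); it shows why the pool degrades to sequential execution:

from multiprocessing import Pool, Lock

lock = Lock()  # created before Pool(), so forked workers inherit it

def update_vm_wrapper(args):
    # The lock serialises the whole task, so only one worker can be
    # inside update_vm() at any time and the pool behaves like a plain loop.
    with lock:
        return update_vm(*args)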

What am I doing wrong here?

Fabric has its own facilities for parallel execution of tasks - you should use those, rather than just trying to execute Fabric tasks in multiprocessing pools. The problem is that the env object is mutated when executing the tasks, so the different workers are stepping on each other (unless you put locking in).
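For example, here is a minimal sketch of that approach, assuming Fabric 1.x (the API the question's code already uses). The names VM_USER, inputs, local_file, remote_zip_file, REMOTE_UPDATE_SCRIPT and REMOTE_RESPONSE_FILE are taken from the question, the pool size mirrors the original Pool(30), the host names are hypothetical stand-ins for the question's list, and the si connection handling is left out:

from fabric.api import env, execute, parallel, put, run, settings

@parallel(pool_size=30)   # Fabric runs the task in its own worker processes
def update_vm():
    # execute() sets env.host_string per host, one host per worker,
    # so nothing shared has to be mutated by hand
    with settings(user=VM_USER, key_filename=inputs['ssh_key_path']):
        put(local_file, remote_zip_file)
        run('tar -zxpf %s' % remote_zip_file)
        run('sudo sh %s' % REMOTE_UPDATE_SCRIPT)
        response_msg = run('cat %s' % REMOTE_RESPONSE_FILE)
        return 'success' in response_msg

def main():
    vms = ['vm1', 'vm2', 'vm3']               # hypothetical; use the real host list
    results = execute(update_vm, hosts=vms)   # dict of {host: return value}

execute() takes care of fanning the task out across the hosts, so there is no need for a wrapper function or a multiprocessing pool of your own.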
