
What options exist for segregating Python environments in a multi-user dask.distributed cluster?

I'm specifically interested in avoiding conflicts when multiple users upload (upload_file) slightly different versions of the same Python file or zip contents.

It would seem this is not really a supported use case as the worker process is long-running and subject to the environment changes/additions of others.
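For illustration, here is a minimal sketch of the kind of clash I mean (the scheduler address and module name are made up):

```python
from dask.distributed import Client

# Two users connect to the same long-running scheduler and workers.
client_a = Client("tcp://scheduler:8786")   # hypothetical shared scheduler
client_b = Client("tcp://scheduler:8786")

# Each user ships their own copy of a module with the same (made-up) name;
# the second upload overwrites the first on every worker, so user A's
# later tasks may end up importing user B's version of the code.
client_a.upload_file("analytics.py")   # user A's version
client_b.upload_file("analytics.py")   # user B's slightly different version
```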

I like the library for easy, on-demand local/remote context switching, so I would appreciate any insight into what options we might have, even if it means some seamless deploy-like step for user-specific worker processes.

Usually the solution for supporting different user environments is to launch and destroy separate sets of Dask workers/schedulers on the fly on top of some other job scheduler like Kubernetes, Marathon, or Yarn.
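On Kubernetes, for example, you might give each user their own short-lived cluster via dask-kubernetes. This is only a rough sketch: the pod spec file and names are assumptions, and the exact KubeCluster API differs between dask-kubernetes releases.

```python
from dask.distributed import Client
from dask_kubernetes import KubeCluster

# One ephemeral cluster per user, built from that user's own image
# (hypothetical pod spec file describing the user's conda/pip environment).
cluster = KubeCluster.from_yaml("alice-worker-spec.yaml")
cluster.scale(10)                 # ten workers for this user's session

client = Client(cluster)

# Run this user's work against their private workers.
futures = client.map(lambda x: x + 1, range(100))
print(sum(client.gather(futures)))

# Tear everything down when the session ends, so nothing leaks
# into other users' environments.
client.close()
cluster.close()
```

Because the whole worker set is created and destroyed per user, each session sees only its own environment and uploaded files.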

If you need to reuse the same set of Dask workers, then you could also be careful about specifying the workers= keyword consistently, but this would be error-prone.
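A rough sketch of what that might look like (the scheduler and worker addresses are made up); every submit/map call for a given user has to remember to pass the same workers= restriction, which is why it is fragile:

```python
from dask.distributed import Client

client = Client("tcp://scheduler:8786")   # hypothetical shared scheduler

# Hypothetical partition of the shared worker pool between two users.
alice_workers = ["tcp://10.0.0.1:40000", "tcp://10.0.0.2:40000"]
bob_workers = ["tcp://10.0.0.3:40000", "tcp://10.0.0.4:40000"]

def process(x):
    return x * 2

# Alice's tasks are restricted to her subset of workers...
alice_futures = client.map(process, range(10), workers=alice_workers)

# ...and Bob's to his, keeping their workloads on disjoint processes.
bob_futures = client.map(process, range(10), workers=bob_workers)
```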
