
Dask force kill all workers

I want to force kill all the dask-worker processes connected to my dask.distributed scheduler. I am NOT running the cluster locally; it is a distributed cluster.

I have tried the following:

workers = scheduler.workers_to_close(n=num_workers)
scheduler.retire_workers(workers=workers, close_workers=True, remove=True)

Here, num_workers is known beforehand. However, this doesn't seem to work reliably: sometimes no workers are killed, and sometimes only one worker is killed. Am I doing this incorrectly? Is there a better/correct way to do this?

What you are doing seems fine to me. I recommend making a minimal example and then raising an issue on GitHub.
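For reference, a minimal sketch of the equivalent call from the client side rather than through Scheduler methods directly, using `Client.retire_workers`. The `LocalCluster` here is only to make the sketch self-contained; on a real distributed cluster you would instead connect with `Client("tcp://scheduler-address:8786")` (address is a placeholder).

```python
from dask.distributed import Client, LocalCluster

# Illustrative in-process cluster; on a real deployment you would
# connect to the existing scheduler instead of creating one here.
cluster = LocalCluster(n_workers=2, processes=False, dashboard_address=None)
client = Client(cluster)

# List every worker the scheduler currently knows about...
workers = list(client.scheduler_info()["workers"])

# ...and retire all of them; close_workers=True asks each worker to shut down.
client.retire_workers(workers=workers, close_workers=True)

# After retire_workers returns, the scheduler should report no workers left.
remaining = len(client.scheduler_info()["workers"])

client.close()
cluster.close()
```

Asking the client to retire every address in `scheduler_info()["workers"]` sidesteps `workers_to_close(n=...)`, whose heuristics may decline to close busy workers, which could explain why fewer workers than expected were killed.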

