
Limiting CPU cores before multiprocessing in Python

I have a program which requires multiprocessing. The function it calls automatically uses every available core. This somehow causes a problem, however, as every core is used by each of the processes, meaning each core runs at 100*x % load, where x is the number of processes spawned. So with 6 processes, each core is at 600% use.

The code is quite simple and uses the usual:

from multiprocessing import Pool

pool = Pool(processes=6)
for i in pool.imap_unordered(main_program, range(100)):
    print('Task in pool has finished')

This, however, puts every core at 600% load and is slower than running each task individually. I assume I am using the multiprocessing module wrong, but I can't seem to figure out where.
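For reference, a minimal self-contained version of the pattern above. Here `main_program` is a stand-in squaring function, since the real worker calls an external application:

```python
from multiprocessing import Pool

def main_program(i):
    # Stand-in for the real worker; the original calls an external application.
    return i * i

if __name__ == "__main__":
    with Pool(processes=6) as pool:
        # Results arrive in completion order, not submission order.
        for result in pool.imap_unordered(main_program, range(100)):
            pass
```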

Note: my ideal solution would be to limit the main function to using only 1 core; however, the function is not mine but an external application I call, and I would not know where to impose that limit in its source code.
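If the computation lives in an external application, one way to confine it to a single core (on Linux) is CPU affinity. A minimal sketch; `pin_to_core` is a hypothetical helper, and `os.sched_setaffinity` is Linux-only:

```python
import os

def pin_to_core(core: int) -> None:
    # Linux-only: restrict the calling process (and any children it
    # subsequently forks or execs) to the given CPU core.
    os.sched_setaffinity(0, {core})
```

Such a helper could be passed as a Pool `initializer`, or the external call could instead be wrapped with `taskset -c N` on the command line.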

Any suggestions?

Many thanks

I found the answer here

Basically, the BLAS threading (from numpy, I suspect) was interfering with my multiprocessing. This fixed it:

import os

os.environ["OMP_NUM_THREADS"] = "1"        # export OMP_NUM_THREADS=1
os.environ["OPENBLAS_NUM_THREADS"] = "1"   # export OPENBLAS_NUM_THREADS=1
os.environ["MKL_NUM_THREADS"] = "1"        # export MKL_NUM_THREADS=1
os.environ["VECLIB_MAXIMUM_THREADS"] = "1" # export VECLIB_MAXIMUM_THREADS=1
os.environ["NUMEXPR_NUM_THREADS"] = "1"    # export NUMEXPR_NUM_THREADS=1
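One caveat, assuming the extra threads come from numpy's BLAS backend: these variables must be set before numpy (or any BLAS-backed library) is first imported, because the thread pools are sized at import time. A compact way to do it at the very top of the entry script:

```python
import os

# Must run before the first `import numpy` anywhere in the program,
# otherwise the BLAS/OpenMP thread pools are already initialized.
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS",
            "VECLIB_MAXIMUM_THREADS", "NUMEXPR_NUM_THREADS"):
    os.environ[var] = "1"
```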
