
Python multiprocessing, CPUs and CPU cores

I was trying out Python 3 multiprocessing on a machine that has 8 CPUs, each with four cores (information from /proc/cpuinfo). I wrote a little script with a useless function, and I use time to see how long it takes to finish.

from multiprocessing import Pool, cpu_count

def f(x):
    # burn CPU for a few seconds, then return a value
    for i in range(100000000):
        x * x
    return x * x

if __name__ == '__main__':   # guard so worker processes don't re-run the pool setup
    with Pool(8) as p:
        a = p.map(f, range(8))
    #~ f(5)   # single, non-parallel call for comparison

Calling f() once without multiprocessing takes about 7 s (time's "real" output). Calling f() 8 times with a pool of 8, as seen above, takes around 7 s again. If I call it 8 times with a pool of 4, I get around 13.5 s; there is some overhead in starting the script, but it runs roughly twice as long. So far so good. Now comes the part that I do not understand. If there are 8 CPUs, each with 4 cores, then calling it 32 times with a pool of 32 should, as far as I can see, run for around 7 s again, but it takes 32 s, which is actually slightly longer than running f() 32 times on a pool of 8.
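
To check this from inside the script rather than with the shell's time, here is a minimal sketch (my own, not from the question) that times the same 32 calls with different pool sizes using time.perf_counter. With the loop bound above each run takes minutes, so shrink the constant in f() to experiment quickly.

import time
from multiprocessing import Pool

def f(x):
    # same useless function as above; shrink the bound for quicker runs
    for i in range(100000000):
        x * x
    return x * x

if __name__ == '__main__':
    for workers in (8, 16, 32):
        start = time.perf_counter()
        with Pool(workers) as p:
            p.map(f, range(32))
        print(workers, 'workers:', round(time.perf_counter() - start, 1), 's')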

So my question is: is multiprocessing not able to make use of the cores, or do I not understand something about cores, or is it something else?

Simplified and short: CPUs and cores are hardware that your computer has. On this hardware there is an operating system, the middleman between the hardware and the programs running on the computer. The programs running on the computer are allotted CPU time. One of these programs is the Python interpreter, which runs all the programs that end with .py. So out of the CPU time on your computer, time is allotted to python3.*, which in turn allots time to the program you are running. The speed will depend on what hardware you have, what operations you are running, and how CPU time is allotted between all these instances.

How is CPU time allotted? It is like a while loop: the OS distributes time incrementally between programs, and the Python interpreter incrementally distributes its allotted time to the programs it runs. This is the reason the entire computer halts when a program misbehaves.
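
To see that each pool worker really is a separate program from the OS's point of view, here is a small sketch (the whoami helper is hypothetical, just for illustration) that prints which worker process each task lands in:

import os
from multiprocessing import Pool

def whoami(x):
    # os.getpid() identifies the worker process handling this task
    return (x, os.getpid())

if __name__ == '__main__':
    with Pool(4) as p:
        results = p.map(whoami, range(8))
    for task, pid in results:
        print('task', task, 'ran in process', pid)

With a pool of 4, the 8 tasks are spread over only 4 distinct PIDs.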

More processes do not equal more access to hardware. They do equal a bigger share of the CPU time allotted to the Python interpreter, since you increase the number of programs under the Python interpreter that do work for your application.

More processes do equal more workhorses.


You see this in practice in your code: you increase the number of workhorses to the point where the allotted CPU time is divided up between so many processes that all of them slow down.
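
As a practical takeaway, one way to pick a sensible pool size is to ask the machine how many logical CPUs the process can actually run on. A rough sketch, assuming Linux (os.sched_getaffinity is not available on every platform):

import os
from multiprocessing import cpu_count

if __name__ == '__main__':
    # logical CPUs the OS reports for the whole machine
    print('cpu_count():', cpu_count())
    # Linux-only: logical CPUs this particular process is allowed to use
    print('sched_getaffinity:', len(os.sched_getaffinity(0)))

A pool larger than that number only divides the same hardware into more slices, which matches the slowdown seen with Pool(32).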
