

Python Multiprocessing: #cores versus #CPU's

It seems to me that using the Python multiprocessing Pool.map as described here parallelizes the work to some extent across the cores of a single CPU, but I have the feeling that there is no speed-up that reflects additional CPUs in the computer. If that's right, is there a way to effectively use "number of CPUs times number of cores per CPU" workers?

(Admittedly, I may be wrong, because my experiments are based on a virtual Amazon cloud machine with 16 virtual CPUs, and I know it's not a "real computer".)
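For context, here is a minimal sketch of the kind of Pool.map setup the question describes; the `work` function and the input values are illustrative placeholders, not from the original post.

```python
from multiprocessing import Pool

def work(n):
    # Illustrative CPU-bound placeholder task
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Pool() defaults to os.cpu_count() worker processes
    with Pool() as pool:
        results = pool.map(work, range(10_000, 10_010))
    print(results)
```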

More exactly, by default the number of processes will be the number of cores presented by the OS. If the computer has more than one CPU, the OS should present the total number of cores to Python. In any case, you can always force the number of processes to a smaller value if you do not want to use all the resources of the machine (for example, if it is also running a background server), or to a higher value if the task is not CPU-bound but IO-bound, as sketched below.
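A minimal sketch of checking the core count the OS reports and overriding the pool size; the `work` function and the specific process counts here are illustrative assumptions, not part of the original answer.

```python
import os
from multiprocessing import Pool

def work(n):
    # Illustrative placeholder task
    return n * n

if __name__ == "__main__":
    # Number of cores the OS presents to Python (across all physical CPUs)
    print("os.cpu_count():", os.cpu_count())

    # Force fewer workers, e.g. to leave resources for a background server
    with Pool(processes=2) as pool:
        print(pool.map(work, range(8)))

    # Force more workers than cores, e.g. for an IO-bound workload
    with Pool(processes=2 * os.cpu_count()) as pool:
        print(pool.map(work, range(8)))
```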

