
Performance running multiple Python scripts simultaneously

Fairly new 'programmer' here, trying to understand how Python interacts with Windows when multiple unrelated scripts are run simultaneously, for example from Task Scheduler or just by starting them manually from IDLE. The scripts just make HTTP calls and write files to disk, and the environment is Python 3.6.

Is the interpreter able to draw resources from the OS (processor/memory/disk) independently, such that the time to complete each script is more or less the same as it would be if it were the only script running (assuming the scripts cumulatively come nowhere near using up all the CPU or memory)? If so, what are the limitations (number of scripts, etc.)?

Pardon mistakes in terminology. Note the quotes on 'programmer'.

how Python interacts with Windows

Python is an executable, a program. When a program is executed, the OS creates a new process.

python myscript.py starts a new python.exe process where the first argument is your script.

when multiple unrelated scripts are run simultaneously

They are multiple processes.
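To make that concrete, here is a minimal sketch of starting several scripts as independent processes from Python itself (rather than from IDLE or the Task Scheduler). The three generated demo scripts are hypothetical stand-ins for your real ones; each `Popen` call launches a separate interpreter process that the OS schedules on its own.

```python
import os
import subprocess
import sys
import tempfile

# Create three tiny demo scripts (stand-ins for your real scripts).
tmpdir = tempfile.mkdtemp()
scripts = []
for name in ("job_a.py", "job_b.py", "job_c.py"):
    path = os.path.join(tmpdir, name)
    with open(path, "w") as f:
        f.write("print('done:', __file__)\n")
    scripts.append(path)

# Each Popen starts a separate python interpreter process;
# the OS schedules them independently of one another.
procs = [subprocess.Popen([sys.executable, s]) for s in scripts]

# Wait for all of them; returncode 0 means the script succeeded.
for p in procs:
    p.wait()
```

Because these are ordinary OS processes, each gets its own memory and its own interpreter, exactly as if you had started them by hand.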

Is the interpreter able to draw resources from the OS (processor/memory/disk) independently?

Yes. Each process may access the OS API however it wishes, to the extent that it is possible.

What are the limitations?

Most likely RAM. These are the same limitations any other process might encounter.

These are difficult questions to answer, in part because they depend on:

  • Your operating system: Your OS gets to schedule and run tasks when it wants, which the Python programmer often does not have control over.

  • What your scripts are actually doing: If your scripts are all trying to write to the same drive, their execution may be stalled more often than if nothing else were writing to that device. Conversely, a script might even run faster alongside the others, since the CPU can let one script compute while another waits on a write. (It's hard to tell without benchmarking.)

  • How many CPUs you're using: More Central Processing Units can improve parallel processing of programs -- but perhaps not. If your programs are constantly reading and writing from the same disk, more CPUs may not be a benefit.

  • Your Python version: (I'm just adding this for completeness.)

Ultimately, the only way you're going to get any real information on this is if you do your own benchmarking -- and even then, you should remember that those figures you find are only applicable to your current setup. That is, if you go to another computer elsewhere, you may find you get different results.

If you aren't familiar with Python's timeit module, I recommend you look into it. (It's part of the standard library, so you already have it.) It'll help you do benchmark testing and let you get some definitive answers for your platform.
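As a quick illustration of timeit, here is a sketch comparing two ways of building a string; the snippets being timed are just examples, not anything from your scripts:

```python
import timeit

# Time each statement 10,000 times and report the total seconds.
t_join = timeit.timeit("''.join(str(i) for i in range(100))", number=10000)
t_concat = timeit.timeit(
    "s = ''\nfor i in range(100):\n    s += str(i)", number=10000
)

print("join:   %.4f s" % t_join)
print("concat: %.4f s" % t_concat)
```

You can time whole functions the same way by passing a callable or a `setup` string; the numbers you get are specific to your machine, which is exactly the point.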

By asking questions like this, you'll soon hear about Python's GIL (Global Interpreter Lock). It has to do with Python threads; some people think it's a blessing and some think it's a curse. Either way, this page:

https://realpython.com/python-gil/

has a good high-level explanation of what it is, when it can work well, and when it might not.
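One point worth noting for your case: the GIL only limits threads running Python bytecode, and it is released while a thread is blocked on I/O. Since your scripts mostly wait on HTTP calls and disk writes, even threads within a single process can overlap that waiting. Here is a sketch where `time.sleep` stands in for a blocking HTTP call or disk write:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(n):
    # time.sleep stands in for a blocking HTTP call or file write;
    # the GIL is released while the thread is blocked.
    time.sleep(0.2)
    return n

start = time.time()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fake_io, range(5)))
elapsed = time.time() - start

print(results)              # [0, 1, 2, 3, 4]
print(round(elapsed, 1))    # roughly 0.2, not 1.0: the five waits overlap
```

Separate processes (like your unrelated scripts) sidestep the GIL entirely, since each process has its own interpreter and its own lock.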
