
Optimizing throughput: multi-thread vs. multi-process

I am working on a processing-intensive system that does a lot of computation. The system has two major components: the first handles input/output, and the second processes that data and computes the results. The problem is that it cannot handle even 50 items at a time, whereas it is supposed to handle more than 1000. Both components run multiple threads for different tasks. I am on Linux and using C++. As I understand it, on Linux threads and processes are almost the same apart from sharing the virtual memory space. So my question is: is it a good idea to separate the I/O from the processing unit, put them in separate executables or processes, and then use shared memory, message queues, or some other IPC technique?
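For reference, here is a minimal sketch of the single-process layout described above: an I/O thread pushes items into a shared queue and worker threads pop and process them. This is only an illustration under stated assumptions, not the actual system; Item and the trivial "computation" in worker_thread() are placeholders.

    // Sketch: one I/O thread produces items, worker threads consume them.
    // Item and the computation in worker_thread() are placeholders.
    #include <algorithm>
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct Item { int payload; };            // placeholder for real input data

    std::queue<Item> work_queue;
    std::mutex queue_mutex;
    std::condition_variable queue_cv;
    bool done = false;

    void io_thread() {                       // stands in for the I/O component
        for (int i = 0; i < 1000; ++i) {
            {
                std::lock_guard<std::mutex> lock(queue_mutex);
                work_queue.push(Item{i});
            }
            queue_cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            done = true;                     // no more items will arrive
        }
        queue_cv.notify_all();
    }

    void worker_thread() {                   // stands in for the processing component
        for (;;) {
            Item item;
            {
                std::unique_lock<std::mutex> lock(queue_mutex);
                queue_cv.wait(lock, [] { return done || !work_queue.empty(); });
                if (work_queue.empty()) return;  // producer finished, queue drained
                item = work_queue.front();
                work_queue.pop();
            }
            long result = item.payload * 2;  // placeholder for the real computation
            (void)result;
        }
    }

    int main() {
        std::thread producer(io_thread);
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < n; ++i) workers.emplace_back(worker_thread);
        producer.join();
        for (auto& w : workers) w.join();
        std::cout << "all items processed\n";
    }

The shared queue in one address space is exactly what a multi-process split would have to replace with shared memory or message queues, at the cost of extra copying and context switches.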

In your situation, absolutely not. Using different processes is done for security: if one process crashes, the other continues. If a hacker manages to get into one process, you may be able to limit that process's permissions so that the hacker cannot do anything harmful (and in that case, bugs in your own code cannot do anything harmful either).

Use whatever profiling tools you have available. Today's computers are so fast that most of the time when a task is running too slowly, it's down to something stupid the application is doing, not to some missing optimisation.
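On Linux, perf record ./app followed by perf report is a common starting point. Even before reaching for perf, a coarse timing pass can show whether I/O or computation dominates. A minimal sketch, where read_input() and compute() are hypothetical stand-ins for the two components:

    // Time each phase separately to see which one actually dominates.
    // read_input() and compute() are hypothetical stand-ins.
    #include <chrono>
    #include <iostream>

    void read_input() { /* ... the real I/O phase ... */ }
    void compute()    { /* ... the real processing phase ... */ }

    template <typename F>
    double time_ms(F&& f) {
        auto start = std::chrono::steady_clock::now();
        f();
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(stop - start).count();
    }

    int main() {
        std::cout << "I/O:     " << time_ms(read_input) << " ms\n";
        std::cout << "compute: " << time_ms(compute)    << " ms\n";
    }

If the numbers show one phase dwarfing the other, that is where to look before touching the process layout.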
