
Will using CPU timer control instead of threads for two separate operations cause a decrease in performance?

I'm writing a program in C++ that has to run two operations concurrently (or at least appear to). I've always read that threads are the best solution for this, but I'm not too familiar with them, so instead I opted for millisecond-based timer control to hand CPU time back and forth between operation 1 and operation 2, since I plan to learn about multithreading later.

Functionally, my program runs as it should, but it feels slow, sluggish, and somewhat choppy. Could this be due to the way I allocate CPU time with the timer? The first operation does light vision processing, and the second updates a GUI.

For reference, I'm using an Intel i3-3110M and 4 GB of DDR3 RAM.

For one thing, using a timer instead of threads means that only one core of your CPU can be used at a time. With threads, your two tasks could (at least in principle) each get their own core and run truly simultaneously, giving you a potential 2x speedup.
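As a point of comparison, here is a minimal `std::thread` sketch (`processVision` and `updateGui` are hypothetical stand-ins for your two operations; note also that many GUI toolkits require GUI calls to stay on one particular thread):

```cpp
#include <thread>

// Hypothetical stand-ins for the two operations.
void processVision() { /* light vision processing */ }
void updateGui()     { /* GUI update */ }

int main()
{
    // Each task gets its own thread, so on a multi-core CPU they can
    // run at the same time instead of taking turns on one core.
    std::thread visionThread(processVision);
    std::thread guiThread(updateGui);

    visionThread.join();  // wait for both threads to finish
    guiThread.join();
}
```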

A second problem with using a timer is really a corollary of the first: if the routine called by the first timer event takes longer than expected to complete, then the routine called by the second timer event cannot start until the first routine has returned, so it starts late. If overruns happen commonly (or every time), the calls to each routine drift increasingly further "behind schedule" as time goes on (see the sketch below).
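To make the drift concrete, here is a sketch of a single-threaded timer loop (`routineA` and `routineB` are hypothetical stand-ins for the two operations). If the routines chronically overrun the 50 ms slot, `sleep_until` returns immediately and the loop never catches up:

```cpp
#include <chrono>
#include <thread>

// Hypothetical stand-ins for the two timer-driven operations.
void routineA() { /* operation 1, e.g. vision processing */ }
void routineB() { /* operation 2, e.g. GUI update */ }

int main()
{
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::milliseconds(50);
    auto next = clock::now() + period;

    for (;;)  // runs forever; a real program would have a stop condition
    {
        routineA();  // if this overruns the 50 ms slot...
        routineB();  // ...this one starts late as well

        // If 'next' is already in the past, sleep_until returns at once,
        // so every subsequent iteration stays behind schedule.
        std::this_thread::sleep_until(next);
        next += period;
    }
}
```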

A third problem with timers is knowing how long to make the delays. With a vision-processing program the answer may be obvious (e.g. if video is coming in at 20 fps, you might schedule the routine to execute once every 50 ms), but for tasks that are logically dependent on each other (e.g. the second routine is supposed to consume the result produced by the first), waiting on a timer is a waste of CPU cycles, since the CPU may end up waiting for no good reason to process data at a specific time that it could just as well have processed sooner and gotten out of the way. In cases like this it is usually better to use some sort of logical triggering mechanism (e.g. the first routine calls the second just before it returns, or, in the multithreaded case, the first routine signals a semaphore or condition variable to wake the second thread immediately, as sketched below). Even in the video-processing case it's usually better to have the first routine triggered by the receipt of a video frame than by a timer: if the timer fires late you've wasted valuable processing time, and if it fires early there won't be any video frame available yet to process.
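Here is a minimal sketch of that condition-variable style of triggering, assuming a hypothetical `Frame` type and a single producer/consumer pair (`produceFrame` and `consumeOneFrame` are placeholder names; a real consumer would loop):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// Hypothetical frame type shared between the two threads.
struct Frame { /* pixel data, timestamp, ... */ };

std::queue<Frame>       frameQueue;
std::mutex              queueMutex;
std::condition_variable frameReady;

// First routine: publish a result and wake the consumer immediately,
// instead of letting the consumer poll on a timer.
void produceFrame(Frame f)
{
    {
        std::lock_guard<std::mutex> lock(queueMutex);
        frameQueue.push(std::move(f));
    }
    frameReady.notify_one();  // logical trigger: "data is ready now"
}

// Second routine: sleep until there is actually work to do.
// One iteration is shown for brevity.
void consumeOneFrame()
{
    std::unique_lock<std::mutex> lock(queueMutex);
    frameReady.wait(lock, [] { return !frameQueue.empty(); });
    Frame f = std::move(frameQueue.front());
    frameQueue.pop();
    lock.unlock();
    // ... process f ...
}

int main()
{
    std::thread consumer(consumeOneFrame);
    produceFrame(Frame{});  // consumer wakes as soon as the frame exists
    consumer.join();
}
```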

As for your particular program, its poor performance might be due to your use of a timer, or it might simply be that your routines aren't efficient enough to finish within the time you've allotted them. I suggest running your program under a profiler to find out where it spends most of its time, then investigating ways to make that part of the program more efficient (and then test and profile again, repeating until you're satisfied with the program's performance).
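Before (or alongside) a full profiler, a rough first pass with `std::chrono` stopwatches can already show which operation is blowing its time budget. This is just a sketch, with `visionStep` and `guiStep` as hypothetical placeholders for your two operations:

```cpp
#include <chrono>
#include <iostream>

// Hypothetical stand-ins for the two operations being measured.
void visionStep() { /* ... */ }
void guiStep()    { /* ... */ }

// Time a single call and report it, so you can see which operation
// is taking longer than its allotted slice.
template <typename F>
void timeIt(const char* label, F&& f)
{
    const auto start = std::chrono::steady_clock::now();
    f();
    const auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::steady_clock::now() - start);
    std::cout << label << " took " << elapsed.count() << " us\n";
}

int main()
{
    timeIt("vision", visionStep);
    timeIt("gui",    guiStep);
}
```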
