
OpenGL Multithreading slower than using a single thread

I'm using Windows 7 with VC++ 2010, and this is a 32-bit application.

I am trying to make my renderer multithreaded, but as it turns out I've made it slower than it was without multiple threads.

I want the main thread to add rendering commands to a list, and a worker thread to do the actual rendering of those commands.

This all does happen, and it draws to the screen fine, but I get a lower FPS when doing so...

I used the benchmark tool in Fraps to get this data:

Time is the time it was benchmarked for, in this case 30 seconds.

Min, max, avg are all FPS values.

With multithreading:

    Frames, Time (ms), Min, Max, Avg
     21483,     30000, 565,  755, 716.100

Without multithreading:

    Frames, Time (ms), Min, Max, Avg
     28100,     30000, 861, 1025, 936.667

Here is some pseudocode (with the relevant event function calls):

Main Thread:
    Add render commands to queue
    ResetEvent(renderCompletedEvent);
    SetEvent(renderCommandsEvent);
    WaitForSingleObject(renderCompletedEvent, INFINITE);

Render Thread:
    WaitForSingleObject(renderCommandsEvent, INFINITE);
    Process commands
    SetEvent(renderCompletedEvent);
    ResetEvent(renderCommandsEvent);
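
For reference, here is roughly what that handoff looks like as compilable Win32 code; RenderCommand, BuildCommands, and ExecuteCommands are hypothetical stand-ins for the real renderer, not code from the question:

    #include <windows.h>
    #include <vector>

    // Hypothetical stand-ins for the real renderer's command type and work.
    struct RenderCommand { int dummy; };
    static std::vector<RenderCommand> g_commands;

    static HANDLE g_renderCommandsEvent;   // manual-reset: "commands are ready"
    static HANDLE g_renderCompletedEvent;  // manual-reset: "rendering is done"

    static void BuildCommands(std::vector<RenderCommand>& q)        { q.assign(100, RenderCommand()); }
    static void ExecuteCommands(const std::vector<RenderCommand>&)  { /* issue the GL calls here */ }

    // Render thread: wait for commands, draw them, then signal completion.
    static DWORD WINAPI RenderThread(LPVOID)
    {
        for (;;) {
            WaitForSingleObject(g_renderCommandsEvent, INFINITE);
            ExecuteCommands(g_commands);
            SetEvent(g_renderCompletedEvent);
            ResetEvent(g_renderCommandsEvent);
        }
    }

    int main()
    {
        g_renderCommandsEvent  = CreateEvent(NULL, TRUE, FALSE, NULL);
        g_renderCompletedEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
        CreateThread(NULL, 0, RenderThread, NULL, 0, NULL);

        for (int frame = 0; frame < 1000; ++frame) {
            BuildCommands(g_commands);                              // main thread does its work...
            ResetEvent(g_renderCompletedEvent);
            SetEvent(g_renderCommandsEvent);                        // ...hands off to the render thread...
            WaitForSingleObject(g_renderCompletedEvent, INFINITE);  // ...and sits idle until it finishes
        }
        return 0;
    }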

Why would you expect this to be faster?

Only one thread is ever doing anything: you create the commands in one thread, then signal the other and wait for it to finish, which takes just as long as doing it all in the first thread, only with more overhead.

To take advantage of multithreading you need to ensure that both threads are doing real work at the same time.
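
One common way to get actual overlap is to double-buffer the command queue: the main thread builds the commands for frame N+1 while the render thread (which keeps the GL context) is still drawing frame N, and the two synchronize only at the slot swap. The sketch below is illustrative only; the names and the two-slot queue are assumptions, not the poster's code:

    #include <windows.h>
    #include <vector>

    // Hypothetical double-buffered command queue; names are illustrative only.
    struct RenderCommand { int dummy; };
    static std::vector<RenderCommand> g_queue[2];   // the two slots alternate between the threads

    static HANDLE g_commandsReady;    // auto-reset: "a filled slot is ready to draw"
    static HANDLE g_bufferConsumed;   // auto-reset: "the previously handed-off slot has been drawn"

    static void BuildCommands(std::vector<RenderCommand>& q)        { q.assign(100, RenderCommand()); }
    static void ExecuteCommands(const std::vector<RenderCommand>&)  { /* issue the GL calls here */ }

    // Render thread: drains one slot per frame and reports it back.
    static DWORD WINAPI RenderThread(LPVOID)
    {
        int readIndex = 0;
        for (;;) {
            WaitForSingleObject(g_commandsReady, INFINITE);   // wait until a slot has been filled
            ExecuteCommands(g_queue[readIndex]);              // draw frame N
            SetEvent(g_bufferConsumed);                       // this slot may now be refilled
            readIndex ^= 1;
        }
    }

    int main()
    {
        g_commandsReady  = CreateEvent(NULL, FALSE, FALSE, NULL);
        g_bufferConsumed = CreateEvent(NULL, FALSE, TRUE,  NULL);  // initially signaled: nothing in flight yet
        CreateThread(NULL, 0, RenderThread, NULL, 0, NULL);

        int writeIndex = 0;
        for (int frame = 0; frame < 1000; ++frame) {
            BuildCommands(g_queue[writeIndex]);               // build frame N+1 while frame N is being drawn
            WaitForSingleObject(g_bufferConsumed, INFINITE);  // make sure the slot handed off last frame is done
            SetEvent(g_commandsReady);                        // hand the freshly built slot to the renderer
            writeIndex ^= 1;
        }
        return 0;
    }

The overlap only pays off if building the commands costs a comparable amount of CPU time to executing them; if one side dominates, the other thread still spends most of its time waiting and you are back to paying only the synchronization overhead.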

I am no OpenGL expert, but in general it is important to realize that on a single core, threads are not used to speed things up; they are used to guarantee that some subsystem stays responsive, at the cost of overall speed. That is, one might keep a GUI thread and a networking thread to ensure that the GUI and the network remain responsive, and that is actually done at a performance cost to the main thread. The CPU will give roughly 1/3 of its time to the main thread, 1/3 to the networking thread and 1/3 to the GUI thread, even if there are no GUI events to handle and nothing going in or out of the network. Thus whatever the main thread is doing gets only about 1/3 of the CPU time it would get in a single-threaded program. The upside is that if a lot of data starts arriving over the network, there is always CPU time reserved to handle it (which matters, because otherwise the network buffer can fill up and additional data starts being dropped or overwritten).

The possible exception is when the threads run on different cores. However, even then be careful: cores can share caches, so if two cores keep invalidating each other's caches, performance can drop dramatically rather than improve. If the cores share some resource used to move data to and from the GPU, or have some other shared limiting resource, that can likewise cause performance losses rather than gains.

In short, threading on a single-CPU system is always about the responsiveness of a subsystem, not about performance. There are possible performance gains when different threads run on multiple cores (which Windows does not always seem to do by default, but it can be forced; see the sketch below). However, there are potential issues with doing this when those cores share some resource, e.g. cache space or some GPU-related resource in your context, which could hurt rather than help performance.
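
For what it's worth, thread placement can be forced with SetThreadAffinityMask; a minimal sketch follows (the processor indices are arbitrary, and RenderThread is a placeholder for the actual work):

    #include <windows.h>

    // Placeholder worker body standing in for the rendering work.
    static DWORD WINAPI RenderThread(LPVOID)
    {
        /* render work here */
        return 0;
    }

    int main()
    {
        // Pin the main thread to logical processor 0 and the worker to logical
        // processor 1, so they are guaranteed to run on different cores.
        // Whether this helps or hurts depends on what those cores share.
        SetThreadAffinityMask(GetCurrentThread(), 1);    // mask bit 0 -> processor 0

        HANDLE worker = CreateThread(NULL, 0, RenderThread, NULL, 0, NULL);
        SetThreadAffinityMask(worker, 2);                // mask bit 1 -> processor 1

        WaitForSingleObject(worker, INFINITE);
        CloseHandle(worker);
        return 0;
    }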
