
Why does GCD increase execution time?

I am trying to learn Grand Central Dispatch (GCD) and used the following code to test it:

With GCD:

#include <dispatch/dispatch.h>
#include <vector>
#include <cstdlib>
#include <iostream>

int main(int argc, char *argv[])  
{
   const int N = atoi(argv[1]);
   __block std::vector<int> a(N, 0);
   dispatch_apply(N, 
     dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), 
     ^(size_t i) 
     { 
       a[i] = i;
#ifdef DEBUG           
       if ( i % atoi(argv[2]) == 0)
         std::cout << a[i] <<  std::endl;
#endif
     });
  return 0;
}

Without GCD:

#include <vector>
#include <cstdlib>
#include <iostream> 

int main(int argc, char *argv[]) 
{
  const int N = atoi(argv[1]);
  std::vector<int> a(N, 0);
  for (int i = 0; i < N; i++)
    {
      a[i] = i;
#ifdef DEBUG
      if (i % atoi(argv[2]) == 0)
        std::cout << a[i] << std::endl;
#endif
    }
  return 0;
}

The test result with GCD:

$ time ./testgcd 100000000 10000000
4.254 secs

The test without GCD:

$ time ./nogcd 100000000 10000000
1.462 secs

I thought that GCD should reduce execution time, but the results show the opposite. I am not sure whether I am misusing GCD. The OS environment is Mac OS X 10.8 with Xcode 4.5, and the compiler is Clang++ 3.1. The hardware is a MacBook Pro with a dual-core i5 CPU.

For comparison, I used OpenMP (with the GCC shipped with Xcode 4.5, on the same laptop):

#include <vector> 
#include <cstdlib>

int main(int argc, char *argv[])  
{
  const int N = atoi(argv[1]);
  std::vector <int> a(N, 0);
  #pragma omp parallel for
  for (int i = 0; i < N; i++)
    a[i] = i;
  return 0;
}

Compiling with and without -fopenmp, I have two executables to test.

With the -fopenmp flag:

$ time ./testopenmp 100000000
1.280 secs

Without the -fopenmp flag:

$ time ./testnoopenmp 100000000
1.626 secs

With OpenMP, the execution time is reduced.

GCD does not necessarily have to increase execution time. The reason it does so in your case is that you are using it incorrectly. It is important to know why your application is slow in the first place, so I went and ran your code under the multi-core profiler (Instruments.app), and here is what it shows:

[Multi-core profiling screenshot]

As you can see, the graph is mostly yellow. Yellow means that a thread is doing nothing and waiting for some task to execute; green means that it is executing a task. In other words, the way you have written your code, the application spends 99% of its time passing tasks around, and each task execution takes almost no time: way too much overhead. So why does this happen?

Because you have scheduled about 100000000 tasks. Running each task has some overhead, which is far greater than the cost of assigning an integer to an array element. The rule of thumb is not to schedule a task if its complexity is less than that of the inter-thread communication required to dispatch it.

So how do you fix this? Schedule fewer tasks and do more work in each task. For example:

#include <dispatch/dispatch.h>
#include <vector>
#include <cstdlib>

int main(int argc, char *argv[])
{
   const int N = atoi(argv[1]);
   __block std::vector<int> a(N, 0);
   dispatch_apply(4,
     dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
     ^(size_t iN)
     {
         size_t s = a.size()/4;
         size_t i = (s*iN);
         size_t n = i + s;
         //printf("Iteration #%lu [%lu, %lu]\n", iN, i, n);
         while (i < n) {
             a[i] = i;
             ++i;  // note: a[i] = i++ would be undefined behavior before C++17
         }
     });
  return 0;
}

Now, the profiler shows the following:

[Not so bad]

Run the test again, and GCD is a little bit faster:

$ time ./test_nogcd 100000000 10000000

real    0m0.516s
user    0m0.378s
sys 0m0.138s
$ time ./test_gcd 100000000 10000000

real    0m0.507s
user    0m0.556s
sys 0m0.138s

Perhaps running fewer tasks will make it better? Try it out. With such a simple workload, chances are that you are much better off with a single-threaded SIMD implementation. Or maybe not :)

Note that you have to take extra care in some situations, for example when the total size cannot be divided into N equal parts. I have omitted all error checking for simplicity.

Also, there are tons of nuances when it comes to parallelizing tasks on today's commodity hardware. I'd recommend getting familiar with MESI, false sharing, memory barriers, CPU caches, cache-oblivious algorithms, and so on. And remember: always use a profiler!

Hope it helps. Good Luck!

GCD will not magically reduce overall execution time, and its use definitely has a cost: consider, e.g., that statements like dispatch_apply_*, and all the behind-the-scenes management they imply, must cost some time. (It seems to me that 2.5 secs is too long a time for such management, but I am not able to assess the validity of your result right now.) The end result is that GCD might improve your performance, if you use it correctly (in the right scenario) and if your hardware allows it.

Possibly the feature of GCD that leads you to believe otherwise is its ability to execute a task asynchronously on a separate thread. By itself, this does not necessarily lead to a shorter overall execution time, but it can help improve app responsiveness, e.g. by not allowing the UI to freeze.

Besides that, if the CPU has more than one core, or you have a multi-CPU system, and threads are scheduled on different cores/CPUs, then GCD might improve the overall execution time because two (actually, up to the number of cores) different tasks would execute in parallel. In that case, the overall duration of the two tasks would equal the duration of the longer task (+ management cost).

Having clarified this, and going into more detail about your example, you can also notice the following:

  1. you are scheduling N tasks on the same secondary thread: those tasks will be executed sequentially even on a multi-core system;

  2. the only other thread doing things, the one which is running main, is not doing anything lengthy, so the overall duration of your program is uniquely determined by the duration of the tasks at point 1;

  3. finally, if you take into account the nature of the task, you see that it is just an assignment executed N times. In the GCD case, for each such assignment you queue a task and later execute it on the secondary thread; in the non-GCD case, you simply iterate a for loop to execute the N assignments, which gives you the fastest time of all. In the former case, for each assignment you also pay the cost of queueing and scheduling the task.

Possibly this is not the most significant scenario for measuring the benefit of GCD, but it could be a good one for measuring the cost of GCD in terms of performance (it looks like a worst-case scenario to me).
