
C++ Array vs vector

When I use a C++ vector, the time spent is 718 milliseconds, while when I use a plain array, the time is almost 0 milliseconds.

Why is there such a large performance difference?

#include <vector>
#include <iostream>
#include <ctime>
using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    const int size = 10000;
    clock_t start, end;

    start = clock();
    vector<int> v(size*size);
    for(int i = 0; i < size; i++)
    {
        for(int j = 0; j < size; j++)
        {
            v[i*size+j] = 1;
        }
    }
    end = clock();
    cout << (end - start) << " milliseconds." << endl; // 718 milliseconds

    start = clock();
    int arr[size*size];
    for(int i = 0; i < size; i++)
    {
        for(int j = 0; j < size; j++)
        {
            arr[i*size+j] = 1;
        }
    }
    end = clock();
    cout << (end - start) << " milliseconds." << endl; // 0 milliseconds
    return 0;
}

Your array arr is allocated on the stack, i.e., the compiler has calculated the necessary space at compile time. At the beginning of the function, the compiler will insert an assembler instruction like

sub esp, 10000*10000*sizeof(int)

which means the stack pointer (esp) is decreased by 10000 * 10000 * sizeof(int) bytes to make room for an array of 10000² integers. This operation is almost instant.

The vector is heap allocated and heap allocation is much more expensive. When the vector allocates the required memory, it has to ask the operating system for a contiguous chunk of memory and the operating system will have to perform significant work to find this chunk of memory.

As Andreas says in the comments, all your time is spent in this line:

vector<int> v(size*size); 

Accessing the vector inside the loop is just as fast as for the array.


Edit:

After all the comments about performance optimizations and compiler settings, I did some measurements this morning. I had to set size=3000, so my measurements used roughly a tenth of the original number of entries. All measurements were performed on a 2.66 GHz Xeon:

  1. With debug settings in Visual Studio 2008 (no optimization, runtime checks, and the debug runtime), the vector test took 920 ms compared to 0 ms for the array test.

    98.48% of the total time was spent in vector::operator[], i.e., the time was indeed spent on the runtime checks.

  2. With full optimization, the vector test needed 56 ms (with a tenth of the original number of entries) compared to 0 ms for the array.

    The vector ctor required 61.72% of the total application running time.

So I guess everybody is right depending on the compiler settings used. The OP's timing suggests an optimized build or an STL without runtime checks.

As always, the moral is: profile first, optimize second.

If you are compiling this with a Microsoft compiler, to make it a fair comparison you need to switch off iterator security checks and iterator debugging, by defining _SECURE_SCL=0 and _HAS_ITERATOR_DEBUGGING=0.

Secondly, the constructor you are using initialises each vector element with zero, while you are not memsetting the array to zero before filling it. So you are traversing the vector twice.

Try:

vector<int> v; 
v.reserve(size*size);

Change the assignment to e.g. arr[i*size+j] = i*j, or some other non-constant expression. I think the compiler optimizes away the whole loop, since the assigned values are never used, or replaces the array with some precalculated values, so the loop isn't even executed and you get 0 milliseconds.

Having changed 1 to i*j, I get the same timings for both vector and array, unless I pass the -O1 flag to gcc; then in both cases I get 0 milliseconds.

So, first of all, double-check whether your loops are actually executed.

To get a fair comparison I think something like the following should be suitable:

#include <sys/time.h>
#include <vector>
#include <iostream>
#include <algorithm>
#include <numeric>


int main()
{
  static size_t const size = 7e6;

  timeval start, end;
  int sum;

  gettimeofday(&start, 0);
  {
    std::vector<int> v(size, 1);
    sum = std::accumulate(v.begin(), v.end(), 0);
  }
  gettimeofday(&end, 0);

  std::cout << "= vector =" << std::endl
        << "(" << end.tv_sec - start.tv_sec
        << " s, " << end.tv_usec - start.tv_usec
        << " us)" << std::endl
        << "sum = " << sum << std::endl << std::endl;

  gettimeofday(&start, 0);
  int * const arr =  new int[size];
  std::fill(arr, arr + size, 1);
  sum = std::accumulate(arr, arr + size, 0);
  delete [] arr;
  gettimeofday(&end, 0);

  std::cout << "= Simple array =" << std::endl
        << "(" << end.tv_sec - start.tv_sec
        << " s, " << end.tv_usec - start.tv_usec
        << " us)" << std::endl
        << "sum = " << sum << std::endl << std::endl;
}

In both cases, dynamic allocation and deallocation is performed, as well as accesses to elements.

On my Linux box:

$ g++ -O2 foo.cpp 
$ ./a.out 
= vector =
(0 s, 21085 us)
sum = 7000000

= Simple array =
(0 s, 21148 us)
sum = 7000000

Both the std::vector<> and array cases have comparable performance. The point is that std::vector<> can be just as fast as a simple array if your code is structured appropriately.


On a related note switching off optimization makes a huge difference in this case:

$ g++ foo.cpp 
$ ./a.out 
= vector =
(0 s, 120357 us)
sum = 7000000

= Simple array =
(0 s, 60569 us)
sum = 7000000

Many of the optimization assertions made by folks like Neil and jalf are entirely correct.

HTH!

EDIT : Corrected code to force vector destruction to be included in time measurement.

You are probably using VC++, in which case the standard library components perform many checks at run time by default (e.g., whether an index is in range). These checks can be turned off by defining some macros as 0 (I think _SECURE_SCL).

Another thing is that I can't even run your code as is: the automatic array is way too large for the stack. When I make it global, then with MinGW 3.5 the times I get are 627 ms for the vector and 26875 ms (!!) for the array, which indicates there are really big problems with an array of this size.

As to this particular operation (filling with value 1), you could use the vector's constructor:

std::vector<int> v(size * size, 1);

and the fill algorithm for the array:

std::fill(arr, arr + size * size, 1);

Two things. One, operator[] is much slower for vector. Two, in most implementations vector will behave strangely at times when you add elements one at a time. I don't mean just that it allocates more memory; it does some genuinely bizarre things at times.

The first one is the main issue. For a mere million bytes, even reallocating the memory a dozen times should not take long (it won't do it on every added element).

In my experiments, preallocating doesn't change its slowness much. When the contents are actual objects it basically grinds to a halt if you try to do something simple like sort it.

Conclusion, don't use stl or mfc vectors for anything large or computation heavy. They are implemented poorly/slowly and cause lots of memory fragmentation.

When you declare the array, it lives on the stack (or in a static memory zone), which is very fast, but it cannot change its size.

When you declare the vector, it allocates dynamic memory, which is not as fast, but is more flexible: you can change the size instead of dimensioning it to the maximum up front.

When profiling code, make sure you are comparing similar things.

vector<int> v(size*size); 

initializes each element in the vector,

int arr[size*size]; 

doesn't. Try

int arr[size * size];
memset( arr, 0, size * size * sizeof(int) ); // memset counts bytes, not elements

and measure again...
