
C99 variable length automatic array performance

Is there significant CPU/memory overhead associated with using automatic arrays with the g++/Intel compilers on a 64-bit x86 Linux platform?

int function(int N) {
    double array[N];
    /* ... */
}
  • overhead compared to allocating the array beforehand (assuming the function is called multiple times)

  • overhead compared to using new

  • overhead compared to using malloc

The range of N may be roughly from 1 KB to 16 KB; stack overrun is not a problem. (The three variants are sketched below.)
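For concreteness, the three allocation forms being compared might look roughly like this (the function names are illustrative, not from the question):

#include <stdlib.h>

void use_vla(int N) {
    double array[N];                            /* C99 VLA: stack space reserved at run time */
    array[0] = 1.0;
}

void use_malloc(int N) {
    double *array = malloc(N * sizeof *array);  /* heap allocation on every call */
    if (array) { array[0] = 1.0; free(array); }
}

void use_preallocated(int N, double *array) {   /* buffer allocated once by the caller */
    (void)N;
    array[0] = 1.0;
}

/* In C++, new double[N] / delete[] is comparable to the malloc variant for this comparison. */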

The performance difference between a VLA and a statically sized array should be negligible. You may need a few extra instructions to calculate how much to grow the stack, but that should be noise in any real program.

Hmm, on further thought, there could also be some overhead depending on how the local variables are laid out in memory and whether there are multiple VLAs.

Consider the case where you have the following locals (and assume they are placed in memory in the order they are declared).

int x;
int arr1[n];
int arr2[n];

Now, whenever you need to access arr2, the code needs to calculate the location of arr2 relative to your base pointer. Because the size of arr1 is only known at run time, that offset cannot be a compile-time constant, so it has to be recomputed from n.
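A minimal sketch of that situation, assuming the layout above (the function name is hypothetical):

int second_vla_addressing(int n) {
    int x = 0;
    int arr1[n];
    int arr2[n];
    arr1[0] = 1;
    arr2[0] = 2;    /* address of arr2[0] is roughly base + sizeof(int) * n, computed at run time */
    (void)x;
    return arr1[0] + arr2[0];
}

With a single VLA (or only fixed-size arrays), compilers can typically keep the other locals at constant offsets from the frame base, so this extra address arithmetic mainly appears when one VLA follows another.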

  • Review the assembly output
  • Profile it for your application (a minimal timing sketch follows this list)
  • Check your memory usage
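If you do profile, something along these lines is a reasonable starting point (the size, iteration count, and function names here are arbitrary assumptions, not from the question); compile at the optimization level you actually use, e.g. -O2:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double work_vla(int N) {
    double a[N];                                /* stack-allocated VLA */
    double s = 0.0;
    for (int i = 0; i < N; i++) { a[i] = i; s += a[i]; }
    return s;
}

static double work_malloc(int N) {
    double *a = malloc(N * sizeof *a);          /* heap allocation on each call */
    if (!a) return 0.0;
    double s = 0.0;
    for (int i = 0; i < N; i++) { a[i] = i; s += a[i]; }
    free(a);
    return s;
}

int main(void) {
    const int N = 2048;                         /* 2048 doubles = 16 KB */
    const int reps = 100000;
    volatile double sink = 0.0;

    clock_t t0 = clock();
    for (int r = 0; r < reps; r++) sink += work_vla(N);
    clock_t t1 = clock();
    for (int r = 0; r < reps; r++) sink += work_malloc(N);
    clock_t t2 = clock();

    printf("VLA:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("malloc: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    (void)sink;
    return 0;
}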
