
GNU GCC compiler optimization and debug

I have a simple question about Eclipse CDT and GNU GCC compiler.

The application is compiled in

  • Debug mode, i.e., Optimization = None (-O0), Debugging = Maximum (-g3), versus the application compiled in
  • Optimized mode, i.e., Optimization = Maximum (-O3), Debugging = None.

Apart from the performance difference, is it guaranteed that an application compiled in these two modes generates exactly the same results?

I am about to release the application to end-users. It is server based and handles several multicast data feeds. Can anyone offer advice on which compilation mode I should choose for the final release?

Thanks.

Your program is only guaranteed to produce the same results if your code is fully standards-compliant. There are many ways to write code with "undefined behaviour" that happens to work in an unoptimized build but breaks when optimized.

For example, suppose I have:

#include <iostream>

struct A
{
   int i;
};

struct B
{
   int i;
};

int main()
{
    A a;
    a.i = 10;
    // A and B are unrelated types, so reading a.i through a B* is
    // undefined behaviour under the strict aliasing rules.
    B* b = reinterpret_cast<B*>(&a);
    std::cout << b->i << std::endl;
    return 0;
}

This will almost certainly print 10, but a compiler could legitimately generate code that does something else, because accessing an `A` object through a pointer to `B` violates the strict aliasing rules.
