
Performance penalty in C++

I have a question about a possible performance penalty when calling a member function in the following situation:

I am working on code for physical computations, with lots of time-demanding tasks such as manipulating huge matrices, linear algebra, etc. I have designed a class to handle a log file, and it has a member function that writes to the file if a bool called debug_mode_on is true. The function signature is

void write_debug_msg(const data_type1 &text1, const data_type2 &text2, etc)

It is inlined, templated, and overloaded, and can receive up to 15 arguments of any type.
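For concreteness, here is a hedged sketch of what such a logger might look like. The class name, constructor, and stream member are assumptions of mine; only write_debug_msg and debug_mode_on come from the question. A C++11 variadic template stands in for the 15 hand-written overloads:

```cpp
#include <cassert>
#include <ostream>
#include <sstream>

// Hypothetical logger class; names other than write_debug_msg and
// debug_mode_on are illustrative, not taken from the original code.
class Logger {
public:
    Logger(std::ostream& out, bool debug_mode_on)
        : out_(out), debug_mode_on_(debug_mode_on) {}

    // Variadic template instead of 15 overloads; arguments by const reference.
    template <typename... Args>
    inline void write_debug_msg(const Args&... args) {
        if (!debug_mode_on_) return;  // disabled: one branch, nothing else runs
        write_all(args...);
        out_ << '\n';
    }

private:
    void write_all() {}  // base case of the recursion

    template <typename T, typename... Rest>
    void write_all(const T& first, const Rest&... rest) {
        out_ << first;       // each argument is streamed in order
        write_all(rest...);
    }

    std::ostream& out_;
    bool debug_mode_on_;
};
```

With debug_mode_on false, the inlined call reduces to a single branch per invocation; none of the argument-formatting or I/O code is reached.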

Ok... so the problem is: when debug_mode_on is false, the function is called and nothing is done. Otherwise, obviously, the arguments are written to the log file. Is there any considerable performance penalty in that? My point is: it is a void function, so nothing is returned; all the arguments are passed by reference; and it is inlined. From my point of view, the only real cost is the evaluation of the bool (not an if but a switch statement). Is that right?

Or can the call of an inlined void function with arguments passed by reference be somehow expensive, so that we are talking about more than the evaluation of a switch statement?

Of course, we are not following the usual strategy for attacking this problem, which is to enclose everything concerning the debug mode in macros like #ifdef DEBUG_MODE ... #endif. We are doing it this way precisely to be able to control the debug mode at runtime.
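For comparison, the compile-time alternative mentioned above might be sketched like this. DEBUG_MODE and LOG_DEBUG are illustrative names of my own, not from the original code:

```cpp
#include <cassert>
#include <sstream>

// When DEBUG_MODE is not defined, every LOG_DEBUG statement expands to an
// empty do/while and vanishes from the compiled binary entirely -- at the
// cost of losing the runtime on/off switch the question asks about.
#ifdef DEBUG_MODE
  #define LOG_DEBUG(stream, expr) do { (stream) << expr << '\n'; } while (0)
#else
  #define LOG_DEBUG(stream, expr) do { } while (0)  // compiles to nothing
#endif
```

The trade-off is exactly the one described in the question: zero cost in release builds, but debug output can no longer be toggled without recompiling.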

The only way to be sure of any performance degradation is to run your code on a profiler, and see if the code is even worth optimizing for performance. Otherwise, you are spending your time worrying about a problem that may or may not be present.

You mentioned "file", which implies I/O. Regardless of any caching, there is a lot more "processing" done (relatively speaking) vs. when debug_mode_on is false. Even if it is just formatting data to text and putting it in a RAM buffer - that can be huge (again, relatively speaking).

Could it also be that you are "logging" deep within some of your algorithms, where it might be called "millions" of times (think O(N log N) or O(N^2) at the innermost segments of some matrix algorithms)?

I'm gonna say: try profiling your app and see where, and how much, time is spent within write_debug_msg.

There's probably no performance impact. However, if you're really concerned, you should profile. You can either profile your entire application, or replicate the relevant call semantics in a small stand-alone application and use something like gprof.
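A small stand-alone micro-benchmark along those lines might look like this: time a tight loop that calls a disabled logger, so the measured overhead is just the per-call branch. All names here are illustrative, not from the original code:

```cpp
#include <cassert>
#include <chrono>
#include <cstdio>

// volatile so the compiler cannot prove the flag is always false and
// delete the check (and the calls) outright.
volatile bool debug_mode_on = false;

inline void write_debug_msg(const char* label, double value) {
    if (!debug_mode_on) return;  // the only work done when logging is off
    std::printf("%s %f\n", label, value);
}

// Returns elapsed milliseconds for `iterations` calls to the disabled logger.
double run_disabled_logging_benchmark(long iterations) {
    auto t0 = std::chrono::steady_clock::now();
    double sum = 0.0;
    for (long i = 0; i < iterations; ++i) {
        sum += 1e-9 * static_cast<double>(i);   // stand-in for real work
        write_debug_msg("partial sum:", sum);   // one branch per iteration
    }
    auto t1 = std::chrono::steady_clock::now();
    std::printf("checksum %f\n", sum);  // keep `sum` observable
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Comparing a run of this loop with and without the write_debug_msg call gives a rough upper bound on the per-call cost; a profiler like gprof would give the same answer in the context of the real application.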

The problem is that the places where these debug-message functions are written are completely arbitrary, I mean, up to each developer on this code (I'm working with humberto). If these functions are always checking whether debug_mode_on is true or not, I was afraid there might be some loss of performance if they are invoked too often. That would be fine for a testing version of the code, but why should the final version (the one the user deals with) depend on where I did or didn't put a debug message? That is why I suggested using #ifdef DEBUG or similar statements to separate both versions at compile time. I'm not an expert in these things, so maybe I am worried about something negligible. All suggestions are welcome; I just wanted to give my point of view. Thanks. (Remember that this is a chemical-physics code, and some of the algorithms scale really badly with system size per se.)

