
Does checking constant TRUE or FALSE take much operational time?

Consider a searching algorithm as an example, say a sequential (linear) search. They say that the fewer conditions a program has to check, the faster the algorithm runs. A condition here means a comparison between values (such as a test for equality). That is, if there is a comparison in a program, it affects the program's performance, and the more of these comparisons there are, the longer the program takes to terminate.

while(comparison)
{body;}

But I wonder how a program would behave if the condition were not a comparison but a literal expression instead, like:

while(TRUE)
{body;}

The question is: how different are a literal condition and a comparison condition in terms of performance?

To make this question clearer, consider these pieces of code:

while(condition)
{body;}

while(TRUE)
{        
    if (condition) break;
    body;
}

And maybe a third one as well, which is quite close to the second:

char done = 0;
while (!done)
{        
    if (condition) done++;
    else body;
}

Which of these is the fastest one? Are the second and the third ones processed more slowly than the first one?

You cannot say anything a priori about the speed. A C compiler translates the code through many intermediate representations, and each transformation changes how the code is represented. Modern compilers can make a lot of changes to the code, so the only way to understand the performance of the final code is to check the output assembly. Nowadays practically any compiler translates while(TRUE) {...} into LOOP: ... goto LOOP, as long as it can prove that TRUE is non-zero.
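
To make that last point concrete, here is a rough sketch (my own illustration, not part of the original answer) of the shape a compiler typically lowers both loop forms to once it has proved the controlling constant is non-zero; the only comparison left is the one that decides when the loop ends:

#include <stdio.h>

void loop_as_goto(void)
{
    int value = 1;
LOOP:
    if (value == 0)          /* the only comparison left */
        return;
    scanf("%d", &value);     /* loop body */
    goto LOOP;
}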

Your question makes sense mainly in the context of minimal (unoptimized) compilation or a simple interpretation of the code, where it can be faster to use internal (built-in) functions than sentinel values to represent booleans.

In this simple example the assembly output is exactly the same:

#include <stdio.h>

void f1()
{
    int value = 1;
    while(value) {
        scanf("%d", &value);
    }
}

void f2()
{
    int value = 1;
    while(1) {
        if(value == 0)
            break;
        scanf("%d", &value);
    }
}

So they have the same performance.

Whenever in doubt, use Compiler Explorer to compare the assembly code, or use a profiler.
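
If you prefer measuring to reading assembly, a rough micro-benchmark along the lines of the sketch below (my own addition; the loop bodies and iteration count are arbitrary) compares the two loop shapes. Keep in mind that with optimizations enabled the compiler may compile both loops into the same code, or even fold them into a constant, which is exactly the point made above:

#include <stdio.h>
#include <time.h>

#define N 100000000UL

/* Loop with the comparison in the header. */
static unsigned long run_while_condition(void)
{
    unsigned long i = 0, sum = 0;
    while (i < N) {
        sum += i;
        i++;
    }
    return sum;
}

/* Loop with a constant condition and an explicit break. */
static unsigned long run_while_true(void)
{
    unsigned long i = 0, sum = 0;
    while (1) {
        if (i >= N)
            break;
        sum += i;
        i++;
    }
    return sum;
}

int main(void)
{
    clock_t t0 = clock();
    unsigned long a = run_while_condition();
    clock_t t1 = clock();
    unsigned long b = run_while_true();
    clock_t t2 = clock();

    printf("while(cond): %lu, %.3f s\n", a, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("while(1):    %lu, %.3f s\n", b, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}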

Generally, if the compiler can determine the condition at compile-time, it generates a "branch always" instruction, which in itself is ever so slightly faster than a "branch if set"/"branch if equal" etc.

But more importantly, "branch always" means that the branch prediction feature of the CPU need not speculate about which instructions it will execute next. It can just load upcoming code from program memory into pre-fetch instruction cache, which is fast.

Whereas a "branch if set" means that the branch predictor has to speculate and take a chance on which code will get executed, and if it guesses wrong, the code will have to be loaded from program memory instead of the cache, which is much slower.
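
To get a feel for how much a mispredicted, data-dependent branch can cost, a sketch along these lines (my own illustration, not from the answer; the array size and values are arbitrary, and an optimizing compiler may replace the branch with branchless code, so compile without heavy optimization to see the effect) times the same loop once with random branch outcomes and once with a branch that is always taken:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 50000000

int main(void)
{
    unsigned char *flags = malloc(N);
    if (!flags)
        return 1;

    /* Unpredictable 0/1 pattern: the predictor guesses wrong about half the time. */
    srand(42);
    for (long i = 0; i < N; i++)
        flags[i] = rand() & 1;

    long sum1 = 0;
    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        if (flags[i])
            sum1 += i;
    clock_t t1 = clock();

    /* Branch is now always taken and therefore trivially predicted. */
    for (long i = 0; i < N; i++)
        flags[i] = 1;

    long sum2 = 0;
    clock_t t2 = clock();
    for (long i = 0; i < N; i++)
        if (flags[i])
            sum2 += i;
    clock_t t3 = clock();

    printf("random branch:   %ld, %.3f s\n", sum1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("constant branch: %ld, %.3f s\n", sum2, (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(flags);
    return 0;
}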
