
Race condition when incrementing and decrementing global variable in C++

I found an example of a race condition that I was able to reproduce with g++ on Linux. What I don't understand is how the order of operations matters in this example.

#include <iostream>
#include <thread>

int va = 0;

void fa() {
    for (int i = 0; i < 10000; ++i)
        ++va;
}

void fb() {
    for (int i = 0; i < 10000; ++i)
        --va;
}

int main() {
    std::thread a(fa);
    std::thread b(fb);
    a.join();
    b.join();
    std::cout << va;
}

I can understand that the order would matter if I had used va = va + 1;, because then the RHS va could change before the result is assigned back to the LHS va. Can someone clarify?

The standard says (quoting the latest draft):

[intro.races]

Two expression evaluations conflict if one of them modifies a memory location ([intro.memory]) and the other one reads or modifies the same memory location.

The execution of a program contains a data race if it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below. Any such data race results in undefined behavior .

Your example program has a data race, and the behaviour of the program is undefined.


What I don't understand is how the order of operations matter in this example.

The order of operations matters because the operations are not atomic, and they read and modify the same memory location.

I can understand that the order would matter if I had used va = va + 1;, because then the RHS va could change before the result is assigned back to the LHS va

The same applies to the increment operator. The abstract machine will:

  • Read a value from memory
  • Increment the value
  • Write a value back to memory

There are multiple steps there that can interleave with operations in the other thread.

Even if there were only a single operation per thread, there would be no guarantee of well-defined behaviour unless those operations were atomic.


Note outside of the scope of C++: A CPU might have a single instruction for incrementing an integer in memory. For example, x86 has such an instruction; it can be executed either atomically (with a lock prefix) or non-atomically. It would be wasteful for the compiler to use the atomic form unless you explicitly use atomic operations in C++.

The important idea here is that when C++ is compiled, it is translated to assembly language. The translation of ++va or --va typically results in assembly code that loads the value of va into a register, adds or subtracts 1 in the register, and stores the result back to va with a separate instruction. In this respect it is exactly the same as va = va + 1;. It also means that ++va is not necessarily atomic.

See here for an explanation of what the Assembly code for these instructions will look like.

In order to make the operations atomic, you could protect the variable with a locking mechanism, or declare it as an atomic variable (which handles synchronization between threads for you):

std::atomic<int> va{0};

Reference: https://en.cppreference.com/w/cpp/atomic/atomic

First of all, this is undefined behaviour since the two threads' reads and writes of the same non-atomic variable va are potentially concurrent and neither happens before the other.

With that being said, if you want to understand what your computer actually does when this program runs, it may help to think of ++va as va = va + 1. In fact, the standard defines ++va as equivalent to va += 1 (with va evaluated only once), and the compiler will likely compile them identically. Since your program contains UB, the compiler is not required to do anything sensible such as using an atomic increment instruction; if you wanted an atomic increment, you should have made va atomic. Similarly, --va is the same as va = va - 1. So in practice, various results are possible.
