
The definition of lock-free

There are three different types of "lock-free" algorithms. The definitions given in Concurrency in Action are:

  1. Obstruction-Free: If all other threads are paused, then any given thread will complete its operation in a bounded number of steps.
  2. Lock-Free: If multiple threads are operating on a data structure, then after a bounded number of steps one of them will complete its operation.
  3. Wait-Free: Every thread operating on a data structure will complete its operation in a bounded number of steps, even if other threads are also operating on the data structure.

Herb Sutter says in his talk Lock-Free Programming:

Informally, "lock-free" ≈ "doesn't use mutexes" == any of these.

I do not see why lock-based algorithms can't fall into the lock-free definition given above. Here is a simple lock-based program:

#include <iostream>
#include <mutex>
#include <thread>

std::mutex x_mut;

void print(int& x) {
    std::lock_guard<std::mutex> lk(x_mut);
    std::cout << x;
}

void add(int& x, int y) {
    std::lock_guard<std::mutex> lk(x_mut);
    x += y;
}

int main() {
    int i = 3;

    std::thread thread1{print, std::ref(i)};
    std::thread thread2(add, std::ref(i), 4);

    thread1.join();
    thread2.join();
}

If both of these threads are operating, then after a bounded number of steps, one of them must complete. Why does my program not satisfy the definition of "Lock-free"?

I would be careful about saying "bounded" without specifying bounded by what.

The canonical lock-free primitive, the CAS loop, does not give any such bound: under heavy contention a thread can be repeatedly unlucky and wait forever, and that is allowed. The defining property of lock-free algorithms is that there is always system-wide progress: at any point in time, some thread will make progress.
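To make that concrete, here is a minimal sketch of my own (not from the question or from any particular library) of the canonical CAS loop, a Treiber-style stack push:

#include <atomic>
#include <utility>

template <typename T>
class LockFreeStack {
    struct Node {
        T value;
        Node* next;
    };
    std::atomic<Node*> head{nullptr};

public:
    void push(T value) {
        Node* node = new Node{std::move(value), head.load(std::memory_order_relaxed)};
        // CAS loop: retry until no other thread has changed head in between.
        // There is no bound on how many times *this* thread retries, but every
        // failed compare_exchange means some other thread's operation on head
        // succeeded, so the data structure as a whole always makes progress.
        while (!head.compare_exchange_weak(node->next, node,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {
            // On failure, node->next has been reloaded with the current head; just retry.
        }
    }
};

(pop is omitted; it follows the same retry pattern but additionally has to deal with the ABA problem and node reclamation.)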

The stronger guarantee, progress for each individual thread at any point in time, is called wait-free.

In other words, lock-free guarantees that a misbehaving thread cannot fatally impact all other threads; wait-free guarantees that it cannot fatally impact any thread.
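For contrast, a sketch of a wait-free operation, assuming the platform implements fetch_add as a single atomic instruction (true on mainstream hardware, e.g. lock xadd on x86):

#include <atomic>

std::atomic<long> hits{0};

// No loop and no retry: every caller finishes in a bounded number of its own
// steps, no matter how many other threads are incrementing at the same time.
void record_hit() {
    hits.fetch_add(1, std::memory_order_relaxed);
}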

If both of these threads are operating, then after a bounded number of steps, one of them must complete. Why does my program not satisfy the definition of "Lock-free"?

Because an (unfair) scheduler must be taken into account.* If a thread holding the lock is put to sleep, no other thread can make any progress, so the algorithm is not lock-free, and there is certainly no bound. That cannot happen with lock-free programming: the resources are always available, and unfortunate scheduling of one thread can only make other threads' operations complete faster, never slower.

* In particular, a scheduler where the suspension time of any thread is not limited in frequency or duration. If it were, any algorithm would be wait-free (with some huge constant) by definition.
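A hypothetical way to watch the scheduler argument in action against the program from the question (the names and the two-second sleep are mine; the sleep merely simulates an untimely suspension of the lock holder):

#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
int counter_locked = 0;
std::atomic<int> counter_lockfree{0};

int main() {
    // Thread a takes the lock and is then "suspended" while still holding it.
    std::thread a([] {
        std::lock_guard<std::mutex> lk(m);
        std::this_thread::sleep_for(std::chrono::seconds(2)); // stands in for the scheduler parking us
        ++counter_locked;
    });

    // Thread b has to wait the full two seconds before it can increment
    // (assuming a won the race for the lock)...
    std::thread b([] {
        std::lock_guard<std::mutex> lk(m);
        ++counter_locked;
    });

    // ...while nothing stops thread c from completing its atomic increment immediately.
    std::thread c([] {
        counter_lockfree.fetch_add(1, std::memory_order_relaxed);
    });

    a.join();
    b.join();
    c.join();
    std::cout << counter_locked << ' ' << counter_lockfree << '\n'; // prints 2 1
}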

Your quote from Concurrency in Action is taken out of context.

In fact, what the book actually says is:

7.1 Definitions and consequences

Algorithms and data structures that use mutexes, condition variables, and futures to synchronize the data are called blocking data structures and algorithms.

Data structures and algorithms that don't use blocking library functions are said to be nonblocking. Not all these data structures are lock-free...

Then it proceeds to further break down nonblocking algorithms into Obstruction-Free, Lock-Free, and Wait-Free.

So a Lock-Free algorithm is

  1. a nonblocking algorithm (it does not use locks like a mutex) and
  2. it is able to make progress in at least one thread in a bounded number of steps.

So both Herb Sutter and Anthony Williams are correct.
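To see why condition 1 on its own is not enough, consider a spin lock built from std::atomic_flag (a minimal sketch of my own; structures like this are the usual example of nonblocking-but-not-lock-free):

#include <atomic>

class SpinLock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;

public:
    void lock() {
        // No blocking library call anywhere, so by the book's definition this
        // is nonblocking. But if the thread that set the flag is suspended by
        // the scheduler, every other caller spins here indefinitely: no
        // guarantee of system-wide progress, hence not lock-free. It is not
        // even obstruction-free, since a paused lock holder prevents a thread
        // running in isolation from ever completing lock().
        while (flag.test_and_set(std::memory_order_acquire)) {
            // busy-wait
        }
    }
    void unlock() {
        flag.clear(std::memory_order_release);
    }
};

So it satisfies criterion 1 (no blocking library functions) but fails criterion 2, which is exactly the gap the lock-free guarantee closes.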
