
Clarifications on dispatch_queue, reentrancy and deadlocks

I need some clarification on how dispatch queues relate to reentrancy and deadlocks.

Reading the blog post Thread Safety Basics on iOS/OS X, I encountered this sentence:

All dispatch queues are non-reentrant, meaning you will deadlock if you attempt to dispatch_sync on the current queue.

So, what is the relationship between reentrancy and deadlock? Why, if a dispatch queue is non-reentrant, does a deadlock arise when you use a dispatch_sync call?

In my understanding, you can have a deadlock using dispatch_sync only if the thread you are running on is the same thread the block is dispatched to.

A simple example is the following. If I run this code on the main thread, it deadlocks, because dispatch_get_main_queue() targets the main thread as well.

dispatch_sync(dispatch_get_main_queue(), ^{

    NSLog(@"Deadlock!!!");

});

Any clarifications?

All dispatch queues are non-reentrant, meaning you will deadlock if you attempt to dispatch_sync on the current queue.

So, what is the relationship between reentrancy and deadlock? Why, if a dispatch queue is non-reentrant, does a deadlock arise when you use a dispatch_sync call?

Without having read that article, I imagine that statement was in reference to serial queues, because it's otherwise false.

Now, let's consider a simplified conceptual view of how dispatch queues work (in some made-up pseudo-language). We also assume a serial queue, and don't consider target queues.

Dispatch Queue

When you create a dispatch queue, basically you get a FIFO queue, a simple data structure where you can push objects on the end, and take objects off the front.

You also get some complex mechanisms to manage thread pools and do synchronization, but most of that is for performance. Let's simply assume that you also get a thread that just runs an infinite loop, processing messages from the queue.

void processQueue(queue) {
    for (;;) {
        waitUntilQueueIsNotEmptyInAThreadSafeManner(queue);
        block = removeFirstObject(queue);
        block();
    }
}
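That worker loop can be made concrete. Below is a minimal, runnable Python sketch of the same idea (the name make_serial_queue is made up for illustration; this is a toy model, not how libdispatch is implemented): a plain FIFO plus one thread that pops blocks off the front and runs them, one at a time.

```python
import queue
import threading

def make_serial_queue():
    """A toy serial dispatch queue: a FIFO of blocks plus one worker thread."""
    q = queue.Queue()

    def process_queue():
        while True:
            block = q.get()    # waits until the queue is not empty, thread-safely
            if block is None:  # sentinel so the demo can shut the worker down
                return
            block()            # run one block at a time, in FIFO order

    worker = threading.Thread(target=process_queue, daemon=True)
    worker.start()
    return q, worker
```

Submitting work is just `q.put(some_block)`; the worker guarantees the blocks run in submission order, each to completion before the next.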

dispatch_async

Taking the same simplistic view of dispatch_async yields something like this...

void dispatch_async(queue, block) {
    appendToEndInAThreadSafeManner(queue, block);
}

All it is really doing is taking the block and adding it to the queue. This is why it returns immediately: it just adds the block onto the end of the data structure. At some point, that other thread will pull this block off the queue, and execute it.

Note that this is where the FIFO guarantee comes into play. The thread pulling blocks off the queue and executing them always takes them in the order that they were placed on the queue. It then waits until that block has fully executed before getting the next block off the queue.

dispatch_sync

Now, another simplistic view of dispatch_sync . In this case, the API guarantees that it will wait until the block has run to completion before it returns. In particular, calling this function does not violate the FIFO guarantee.

void dispatch_sync(queue, block) {
    bool done = false;
    dispatch_async(queue, { block(); done = true; });
    while (!done) { }
}

Now, this is actually done with semaphores, so there are no CPU loops or boolean flags, and it doesn't use a separate block, but we are trying to keep it simple. You should get the idea.

The block is placed on the queue, and then the function waits until it knows for sure that "the other thread" has run the block to completion.
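The semaphore version can be sketched in the same toy Python model (the worker loop and the dispatch_async/dispatch_sync names mirror the pseudocode above; this is an illustration under those assumptions, not the real implementation). A threading.Event plays the role of the semaphore: the caller sleeps on it instead of spinning, and the wrapper signals it after the block finishes.

```python
import queue
import threading

work_queue = queue.Queue()

def process_queue():
    """The queue's worker loop, as in the earlier sketch."""
    while True:
        block = work_queue.get()
        if block is None:   # sentinel so the demo can shut down
            return
        block()

worker = threading.Thread(target=process_queue, daemon=True)
worker.start()

def dispatch_async(block):
    work_queue.put(block)            # append and return immediately

def dispatch_sync(block):
    done = threading.Event()         # stands in for the semaphore
    def wrapper():
        block()
        done.set()                   # signal: block ran to completion
    dispatch_async(wrapper)
    done.wait()                      # sleep (no CPU spin) until signaled
```

Called from a thread that is not the worker, this is perfectly safe: the caller parks, the worker runs the block, the caller wakes.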

Reentrancy

Now, we can get a reentrant call in a number of different ways. Let's consider the most obvious.

block1 = {
    dispatch_sync(queue, block2);
}
dispatch_sync(queue, block1);

This will place block1 on the queue, and wait for it to run. Eventually the thread processing the queue will pop block1 off, and start executing it. When block1 executes, it will put block2 on the queue, and then wait for it to finish executing.

This is one meaning of reentrancy: when you re-enter a call to dispatch_sync from another call to dispatch_sync.

Deadlock from reentering dispatch_sync

However, block1 is now running inside the queue's for loop. That code is executing block1, and will not process anything more from the queue until block1 completes.

Block1, though, has placed block2 on the queue, and is waiting for it to complete. Block2 has indeed been placed on the queue, but it will never be executed. Block1 is "waiting" for block2 to complete, but block2 is sitting on a queue, and the code that pulls it off the queue and executes it will not run until block1 completes.

Deadlock from NOT reentering dispatch_sync

Now, what if we change the code to this...

block1 = {
    dispatch_sync(queue, block2);
}
dispatch_async(queue, block1);

We are not technically reentering dispatch_sync . However, we still have the same scenario, it's just that the thread that kicked off block1 is not waiting for it to finish.

We are still running block1, waiting for block2 to finish, but the thread that will run block2 must finish with block1 first. This will never happen because the code to process block1 is waiting for block2 to be taken off the queue and executed.

Thus, reentrancy for dispatch queues is not technically reentering the same function, but reentering the processing of the same queue.

Deadlocks from NOT reentering the queue at all

In its simplest case (and the most common), let's assume [self foo] gets called on the main thread, as is common for UI callbacks.

-(void) foo {
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Never gets here
    });
}

This doesn't "reenter" the dispatch queue API, but it has the same effect. We are running on the main thread. The main thread is where the blocks are taken off the main queue and processed. The main thread is currently executing foo and a block is placed on the main-queue, and foo then waits for that block to be executed. However, it can only be taken off the queue and executed after the main thread gets done with its current work.

This will never happen, because the main thread will not progress until foo completes, and foo will never complete until the block it is waiting for runs... which will not happen.

In my understanding, you can have a deadlock using dispatch_sync only if the thread you are running on is the same thread the block is dispatched to.

As the aforementioned example illustrates, that's not the case.

Furthermore, there are other scenarios that are similar, but not so obvious, especially when the sync access is hidden in layers of method calls.

Avoiding deadlocks

The only sure way to avoid deadlocks is to never call dispatch_sync (that's not exactly true, but it's close enough). This is especially true if you expose your queue to users.

If you use a self-contained queue, and control both its use and its target queues, you can use dispatch_sync with reasonable safety.

There are, indeed, some valid uses of dispatch_sync on a serial queue, but most are probably unwise, and should only be done when you know for certain that you will not be 'sync' accessing the same or another resource (the latter is known as deadly embrace).
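One defensive pattern, again in the toy model (safe_sync is a hypothetical name, not a real GCD API), is to check whether the caller is already the queue's worker thread, and if so run the block inline instead of enqueueing and waiting. This is similar in spirit to the trick Core Data's performBlockAndWait uses, discussed below. Note that in real GCD, thread identity is not a reliable proxy for queue identity (queues share pool threads and can target other queues), which is part of why dispatch_get_current_queue was deprecated.

```python
import queue
import threading

work_queue = queue.Queue()

def process_queue():
    while True:
        block = work_queue.get()
        if block is None:
            return
        block()

worker = threading.Thread(target=process_queue, daemon=True)
worker.start()

def safe_sync(block):
    """Run `block` to completion on the queue. If the caller IS the
    worker thread, run inline: enqueueing and waiting would deadlock."""
    if threading.current_thread() is worker:
        block()                      # reentrant call: run immediately
        return
    done = threading.Event()
    def wrapper():
        block()
        done.set()
    work_queue.put(wrapper)
    done.wait()
```

With this check, a sync call issued from inside a block already running on the queue runs the inner block immediately instead of hanging, at the cost of the "strict FIFO, one block at a time" guarantee for that inner block.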

EDIT

Jody, Thanks a lot for your answer. I really understood all of your stuff. I would like to put more points...but right now I cannot. 😢 Do you have any good tips in order to learn this under the hood stuff? – Lorenzo B.

Unfortunately, the only books on GCD that I've seen are not very advanced. They go over the easy surface level stuff on how to use it for simple general use cases (which I guess is what a mass market book is supposed to do).

However, GCD is open source. Here is the webpage for it, which includes links to its svn and git repositories. However, the webpage looks old (2010), and I'm not sure how recent the code is. The most recent commit to the git repository is dated Aug 9, 2012.

I'm sure there are more recent updates; but not sure where they would be.

In any event, I doubt the conceptual framework of the code has changed much over the years.

Also, the general idea of dispatch queues is not new, and has been around in many forms for a very long time.

Many moons ago, I spent my days (and nights) writing kernel code (worked on what we believe to have been the very first symmetric multiprocessing implementation of SVR4), and then when I finally breached the kernel, I spent most of my time writing SVR4 STREAMS drivers (wrapped by user space libraries). Eventually, I made it fully into user space, and built some of the very first HFT systems (though it wasn't called that back then).

The dispatch queue concept was prevalent in every bit of that. Its emergence as a generally available user-space library is only a somewhat recent development.

Edit #2

Jody, thanks for your edit. So, to recap: a serial dispatch queue is not reentrant, since reentrancy could produce an invalid state (a deadlock). A reentrant function, on the contrary, would not. Am I right? – Lorenzo B.

I guess you could say that, because it does not support reentrant calls.

However, I think I would prefer to say that the deadlock is the result of preventing invalid state. If anything else occurred, then either the state would be compromised, or the definition of the queue would be violated.

Core Data's performBlockAndWait

Consider -[NSManagedObjectContext performBlockAndWait:]. It's synchronous, and it is reentrant. It has some pixie dust sprinkled around the queue access so that the second block runs immediately when called from "the queue." Thus, it has the traits I described above.

[moc performBlock:^{
    [moc performBlockAndWait:^{
        // This block runs immediately, and to completion before returning
        // However, `dispatch_async`/`dispatch_sync` would deadlock
    }];
}];

The above code does not "produce a deadlock" from reentrancy (but the API can't avoid deadlocks entirely).

However, depending on who you talk to, doing this can produce invalid (or unpredictable/unexpected) state. In this simple example, it's clear what's happening, but in more complicated parts it can be more insidious.

At the very least, you must be very careful about what you do inside a performBlockAndWait .

Now, in practice, this is only a real issue for main-queue MOCs, because the main run loop is running on the main queue, so performBlockAndWait recognizes that and immediately executes the block. However, most apps have a MOC attached to the main queue, and respond to user save events on the main queue.

If you want to watch how dispatch queues interact with the main run loop, you can install a CFRunLoopObserver on the main run loop, and watch how it processes the various input sources in the main run loop.

If you've never done that, it's an interesting and educational experiment (though you can't assume what you observe will always be that way).

Anyway, I generally try to avoid both dispatch_sync and performBlockAndWait .
