
Does putting a block on a sync GCD queue lock that block and pause the others?

I read that GCD synchronous queues (dispatch_sync) should be used to implement critical sections of code. An example would be a block that subtracts a transaction amount from an account balance. The interesting part of sync calls is the question of how they affect the work of other blocks on multiple threads.
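
For illustration, a minimal sketch of the kind of critical section described here, guarded by a serial queue (the queue label and the balance variables below are hypothetical, not part of the original question):

// Hypothetical serial queue dedicated to guarding the balance.
dispatch_queue_t accountQueue =
    dispatch_queue_create("com.example.account", DISPATCH_QUEUE_SERIAL);

__block double balance = 100.0;
double transactionAmount = 25.0;

// The read-modify-write of `balance` runs as a single block on the
// serial queue, so it cannot interleave with another balance update.
dispatch_sync(accountQueue, ^{
    balance -= transactionAmount;
});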

Let's imagine a situation where there are 3 threads that use and execute both system and user-defined blocks from the main and custom queues in asynchronous mode. Those blocks are all executed in parallel in some order. Now, if a block is put on a custom queue in sync mode, does that mean that all other blocks (including those on other threads) are suspended until that block has executed successfully? Or does it mean that only some lock will be placed on that block while the others still execute? However, if other blocks use the same data as the sync block, then it is inevitable that those blocks will wait until the lock is released.

IMHO it doesn't matter whether there is one core or multiple cores; sync mode should freeze the whole app's work. However, these are just my thoughts, so please comment and share your insights :)

Synchronous dispatch suspends the execution of your code until the dispatched block has finished. Asynchronous dispatch returns immediately; the block is executed asynchronously with regard to the calling code:

dispatch_sync(somewhere, ^{ something });
// Reached later, when the block is finished.

dispatch_async(somewhere, ^{ something });
// Reached immediately. The block might be waiting
// to be executed, executing or already finished.

And there are two kinds of dispatch queues, serial and concurrent. Serial queues dispatch their blocks strictly one by one, in the order they were added; when one finishes, the next one starts, and only one thread is needed for this kind of execution. Concurrent queues dispatch their blocks concurrently, in parallel, using more threads.
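
A minimal sketch of the two kinds of queues (the labels are made up):

dispatch_queue_t serialQueue =
    dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

dispatch_queue_t concurrentQueue =
    dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

// On the serial queue, blocks never overlap; they run one by one in FIFO order.
dispatch_async(serialQueue, ^{ /* runs alone on this queue */ });

// On the concurrent queue, blocks may run at the same time on different threads.
dispatch_async(concurrentQueue, ^{ /* may overlap with other blocks on this queue */ });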

You can mix and match sync/async dispatch and serial/concurrent queues as you see fit. If you want to use GCD to guard access to a critical section, use a single serial queue and dispatch all operations on the shared data to this queue (synchronously or asynchronously, it does not matter). That way there will always be just one block operating on the shared data:

- (void) addFoo: (id) foo {
    dispatch_sync(guardingQueue, ^{ [sharedFooArray addObject:foo]; });
}

- (void) removeFoo: (id) foo {
    dispatch_sync(guardingQueue, ^{ [sharedFooArray removeObject:foo]; });
}

Now if guardingQueue is a serial queue, the add/remove operations can never clash, even if the addFoo: and removeFoo: methods are called concurrently from different threads.
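
For example, a hedged usage sketch (the FooStore class, worker queue and loop are assumptions added here, not part of the original answer): however many threads call these methods at once, every mutation of sharedFooArray still goes through guardingQueue one block at a time.

// Hypothetical object exposing the addFoo:/removeFoo: methods shown above.
FooStore *store = [FooStore new];

dispatch_queue_t workers =
    dispatch_queue_create("com.example.workers", DISPATCH_QUEUE_CONCURRENT);

for (int i = 0; i < 100; i++) {
    dispatch_async(workers, ^{
        id foo = @(i);
        [store addFoo:foo];    // serialized on guardingQueue inside addFoo:
        [store removeFoo:foo]; // serialized on guardingQueue inside removeFoo:
    });
}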

No, it doesn't.

The synchronised part is that the block is put on a queue, but control does not pass back to the calling function until the block returns.

Many uses of GCD are asynchronous: you put a block on a queue and, rather than waiting for the block to complete its work, control is passed back to the calling function immediately.

This has no effect on other queues.
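
A small sketch of that point with two unrelated (hypothetical) queues: the synchronous call only blocks its caller, and only until its own block has returned; work queued elsewhere keeps running.

dispatch_queue_t queueA = dispatch_queue_create("com.example.a", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queueB = dispatch_queue_create("com.example.b", DISPATCH_QUEUE_SERIAL);

// This block on queueB is unaffected by the synchronous dispatch below.
dispatch_async(queueB, ^{
    NSLog(@"queueB keeps running");
});

// Blocks only the calling thread, and only until this block finishes on queueA.
dispatch_sync(queueA, ^{
    NSLog(@"work on queueA");
});
NSLog(@"caller resumes once the queueA block has returned");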

If you need to serialize access to a certain resource, there are at least two mechanisms available to you. If you have an account object (one that is unique for a given account number), then you can do something like:

@synchronized(accountObject) { ... }
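
Fleshed out into a small sketch tied to the account example from the question (the Account class and the method and property names are assumptions):

- (void) applyTransaction: (double) amount toAccount: (Account *) accountObject {
    @synchronized (accountObject) {
        // Only one thread at a time can enter this block for
        // the same accountObject instance.
        accountObject.balance -= amount;
    }
}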

If you don't have an object but are using a C structure, of which there is only one instance for a given account number, then you can do the following:

// Should be added to the account structure.
// The initial count of 1 means at most one thread
// can hold the lock (be inside the critical section) at a time.
dispatch_semaphore_t accountLock = dispatch_semaphore_create(1);

// In your block you do the following:
block = ^(void) {
    dispatch_semaphore_wait(accountLock,DISPATCH_TIME_FOREVER);
    // Do something
    dispatch_semaphore_signal(accountLock);
};

// -- Edited: the semaphore was leaking.
// At the appropriate time, release the lock.
// If the semaphore was created in init, it should be
// released when the owning object is deallocated.
dispatch_release(accountLock);

With this, regardless of the level of concurrency of your queues, you are guaranteed that only one thread will access an account at any given time.

There are many more types of synchronization objects, but these two are easy to use and quite flexible.
