
What is the purpose of the fairness parameter in ReentrantLock in Java?

I found the following text while going through the Javadoc of ReentrantLock:

fairness of locks does not guarantee fairness of thread scheduling. Thus, one of many threads using a fair lock may obtain it multiple times in succession while other active threads are not progressing and not currently holding the lock.

As per my understanding, this means that if the OS scheduler schedules the same thread (which previously held the lock) and it tries to acquire the same lock again, Java will allow it to acquire the lock and won't obey the fairness parameter. Could someone please explain what the purpose of the fairness parameter is, then, and under what conditions one should use it?
I am just wondering whether it is like a priority value, which might influence the scheduler but cannot guarantee the thread execution order.
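For context, the fairness parameter is the boolean passed to the ReentrantLock constructor. A minimal sketch of how it is set (the class and field names below are just illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockExample {
    // true requests the fair ordering policy; the no-arg constructor
    // (or false) gives the default non-fair ("barging") lock.
    private final ReentrantLock lock = new ReentrantLock(true);

    private int counter;

    public void increment() {
        lock.lock();      // with a fair lock, queued waiters acquire roughly in FIFO order
        try {
            counter++;    // critical section
        } finally {
            lock.unlock();
        }
    }
}
```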

fairness of locks does not guarantee fairness of thread scheduling. Thus, one of many threads using a fair lock may obtain it multiple times in succession while other active threads are not progressing and not currently holding the lock.

I interpret "not progressing" to mean "not progressing for reasons not related to the lock in question." I think they're trying to tell you that "fairness" only means anything when the lock is so heavily contested that there often are one or more threads awaiting their turn to lock it.

If thread T releases a "fair" lock that no other thread currently is awaiting, then "fairness" has no impact on which thread will get it next. That's just a straight-up race between the threads, as moderated by the OS scheduler.

It's only when multiple threads are waiting that a fair lock is supposed to "favor" the one that's been waiting the longest. In particular, I would hope that if some thread T releases a "fair" lock that other threads are awaiting, and then thread T immediately tries to lock it again, the lock() function would notice the other waiting threads and send T to the back of the queue.
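As a rough illustration (not a guarantee, since scheduling is never deterministic), a small demo like the following usually shows that hand-off: each thread releases the fair lock and immediately tries to re-acquire it, and the output typically rotates between the threads rather than one thread re-acquiring repeatedly.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairHandOffDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(true); // fair lock

        Runnable worker = () -> {
            for (int i = 0; i < 3; i++) {
                lock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " got the lock");
                    Thread.sleep(10); // hold it long enough that the other threads queue up
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    lock.unlock();
                }
                // The loop immediately tries to re-acquire; with a fair lock this thread
                // should normally go to the back of the queue behind the parked waiters.
            }
        };

        Thread a = new Thread(worker, "A");
        Thread b = new Thread(worker, "B");
        Thread c = new Thread(worker, "C");
        a.start(); b.start(); c.start();
        a.join(); b.join(); c.join();
    }
}
```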

But, I don't actually know how it is implemented in any particular JVM.


PS: IMO, "fairness" is like a bandage to stop the bleeding from a compound fracture. If your program has a lock that is so heavily contested that "fairness" would make any difference, then that's a serious design flaw.

The same Javadoc also says,

Programs using fair locks accessed by many threads may display lower overall throughput (i.e., are slower; often much slower) than those using the default setting.

In a naïve view, the behavior of threads using a fair lock would be like:

Thread 1         | Thread 2         | Thread 3
-----------------|------------------|-----------------
Acquire          | Do something     | Do something
Critical Section | Try Acquire      | Do something
Critical Section | Blocked          | Try Acquire
Release          | Acquire          | Blocked
Do something     | Critical Section | Blocked
Try Acquire      | Release          | Acquire
Blocked          | Do something     | Critical Section
Acquire          | Do something     | Release

“Try Acquire” refers to a call to lock() that does not immediately succeed because another thread owns the lock. It does not refer to tryLock(), which isn't fair in general.
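The ReentrantLock Javadoc makes the same point: the untimed tryLock() will barge even on a fair lock, while the timed tryLock(0, TimeUnit.SECONDS) honors the fairness setting. A minimal sketch contrasting the two (the class and method names are just illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockFairness {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock

    void barging() {
        // tryLock() grabs the lock whenever it happens to be free, even if other
        // threads have been queued up longer; it ignores the fairness setting.
        if (lock.tryLock()) {
            try { /* critical section */ } finally { lock.unlock(); }
        }
    }

    void honoringFairness() throws InterruptedException {
        // Per the ReentrantLock Javadoc, the timed variant with a zero timeout
        // honors the fairness policy, so it will not barge past queued waiters.
        if (lock.tryLock(0, TimeUnit.SECONDS)) {
            try { /* critical section */ } finally { lock.unlock(); }
        }
    }
}
```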

In this naïve view, the threads get the lock in the order “Thread 1”, “Thread 2”, “Thread 3”, because that is the order of the acquisition attempts. In particular, when “Thread 1” tries to acquire the lock right at the time “Thread 2” releases it, it won't overtake as would happen with an unfair lock; rather, “Thread 3” gets it because it has been waiting longer.

But, as the documentation says, thread scheduling is not fair. So the following may happen instead.

Thread 1         | Thread 2         | Thread 3
-----------------|------------------|-----------------
Acquire          | Do something     | Do something
Critical Section |                  | Do something
Critical Section |                  |
Release          |                  |
Do something     |                  |
Acquire          | Try Acquire      | Try Acquire
Critical Section | Blocked          | Blocked
Critical Section | Blocked          | Blocked

The empty cells represent phases in which the threads simply do not get any CPU time. There may be more threads than CPU cores, including the threads of other processes. The operating system may even prefer to let “Thread 1” continue on a core rather than switching to the other threads, simply because that thread is already running and switching takes time.

Generally, it's not a good idea to try to predict the relative timing of reaching a certain point, such as the lock acquisition, from the preceding workload. In an environment with an optimizing JIT compiler, even two threads executing exactly the same code with exactly the same input may have entirely different execution times.

So when we can't predict the time of the lock() attempts, it's not very useful to insist that the lock be acquired in that unpredictable, unknown order. One explanation of why developers still want fairness is that even when the resulting order is not predictable, it should ensure that every thread makes progress instead of waiting for a lock indefinitely while other threads repeatedly overtake it. But this brings us back to the unfair thread scheduling; even when there is no lock at all, there is no guarantee that all threads make progress.

So why does the fairness option still exist? Because sometimes, people are fine with the way it works in most cases, even when there is no strong guarantee that it will always work that way. Or simply because developers would repeatedly ask for it if it didn't exist. Supporting fairness doesn't cost much and doesn't affect the performance of the unfair locks.

ReentrantLock is implemented on top of AbstractQueuedSynchronizer, which maintains a first-in-first-out (FIFO) wait queue.

Let's say three threads A, B, and C try to acquire the lock in succession. A acquires the lock, so B and C are each wrapped in an AbstractQueuedSynchronizer#Node and placed into the queue. These two threads are then suspended (parked).

When thread A releases the lock, it wakes up its successor node (AbstractQueuedSynchronizer#unparkSuccessor), that is, thread B. Thread B will try to acquire the lock again after it is awakened.

Suppose that just when thread B is awakened, a thread D suddenly arrives and tries to acquire this lock. With a fair lock, thread D sees that there are other nodes in the queue waiting to acquire the lock (AbstractQueuedSynchronizer#hasQueuedPredecessors) and will simply be queued and suspended.

With an unfair lock, thread D will immediately try to acquire the lock, which means it gets to try to "jump the queue" once. If this queue jump succeeds, D acquires the lock immediately (and node B is suspended again: it lost the race with thread D and was cut in line). If it fails, D is suspended and enters the queue as a Node.
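To make this concrete, here is a simplified paraphrase of how the fair and non-fair acquire paths differ, modeled on the OpenJDK sources (the class and method names below are illustrative; the real logic lives in ReentrantLock.FairSync and ReentrantLock.NonfairSync):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Simplified paraphrase of ReentrantLock's acquire logic; not the actual JDK code.
class SimplifiedSync extends AbstractQueuedSynchronizer {

    // Fair path: even when the lock is free (state == 0), first check whether
    // other threads are already queued and, if so, do not barge past them.
    boolean tryAcquireFair(int acquires) {
        if (getState() == 0) {
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;                      // acquired without jumping the queue
            }
        } else if (Thread.currentThread() == getExclusiveOwnerThread()) {
            setState(getState() + acquires);      // reentrant re-acquisition by the owner
            return true;
        }
        return false;                             // caller will enqueue and park this thread
    }

    // Non-fair path: if the lock happens to be free, grab it right away, even while
    // a just-woken waiter (thread B above) is still on its way. This is the
    // one-time "queue jump" described in the text.
    boolean tryAcquireNonfair(int acquires) {
        if (getState() == 0 && compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(Thread.currentThread());
            return true;
        }
        if (Thread.currentThread() == getExclusiveOwnerThread()) {
            setState(getState() + acquires);      // reentrant re-acquisition by the owner
            return true;
        }
        return false;                             // fall through to queuing, same as the fair path
    }
}
```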

Why does an unfair lock perform better, and when should a fair lock be used?

This is from Java Concurrency in Practice:

One reason barging locks perform so much better than fair locks under heavy contention is that there can be a significant delay between when a suspended thread is resumed and when it actually runs. Let's say thread A holds a lock and thread B asks for that lock. Since the lock is busy, B is suspended. When A releases the lock, B is resumed so it can try again. In the meantime, though, if thread C requests the lock, there is a good chance that C can acquire the lock, use it, and release it before B even finishes waking up. In this case, everyone wins: B gets the lock no later than it otherwise would have, C gets it much earlier, and throughput is improved.

Fair locks tend to work best when they are held for a relatively long time or when the mean time between lock requests is relatively long. In these cases, the condition under which barging provides a throughput advantage (when the lock is unheld but a thread is currently waking up to claim it) is less likely to hold.
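A rough, non-rigorous way to observe that throughput gap is to let several threads hammer a short critical section behind a fair and a non-fair lock and count completed acquisitions in a fixed time window. This is only a sketch (no JIT warm-up, wall-clock timing), so treat the numbers as indicative at best:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairVsUnfairThroughput {

    // Runs `threads` workers for `millis` ms, each repeatedly taking the given lock
    // around a trivially short critical section, and returns the total acquisition count.
    static long run(ReentrantLock lock, int threads, long millis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + millis;
        long[] counts = new long[threads];
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                while (System.currentTimeMillis() < deadline) {
                    lock.lock();
                    try {
                        counts[id]++;      // trivially short critical section
                    } finally {
                        lock.unlock();
                    }
                }
            });
            workers[i].start();
        }
        long total = 0;
        for (Thread t : workers) {
            t.join();
        }
        for (long c : counts) {
            total += c;
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("non-fair: " + run(new ReentrantLock(false), 8, 2000));
        System.out.println("fair:     " + run(new ReentrantLock(true), 8, 2000));
    }
}
```

On most machines the fair lock completes far fewer acquisitions, which is the effect the book describes.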
