How does the lock statement ensure intra-processor synchronization?
I have a small test application that executes two threads simultaneously. One increments a static long _value, the other decrements it. I've used ProcessThread.ProcessorAffinity to ensure that the threads are pinned to different physical (non-HT) cores, forcing cross-core communication, and I have ensured that they overlap in execution time for a significant amount of time.
Of course, the following does not lead to zero:
for (long i = 0; i < 10000000; i++)
{
_value += offset;
}
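For reference, here is a minimal self-contained version of that repro (a sketch assuming a console app; the affinity pinning and overlap bookkeeping from the real test are omitted, and the type/method names are made up for illustration):

```csharp
using System;
using System.Threading;

class RaceDemo
{
    static long _value;

    static void Run(long offset)
    {
        for (long i = 0; i < 10000000; i++)
            _value += offset;   // plain read-modify-write, not atomic
    }

    static void Main()
    {
        // Two threads apply opposite offsets without any synchronization.
        var inc = new Thread(() => Run(+1));
        var dec = new Thread(() => Run(-1));
        inc.Start(); dec.Start();
        inc.Join(); dec.Join();

        // Lost updates make a nonzero result very likely (the exact
        // value is nondeterministic, so none is asserted here).
        Console.WriteLine(_value);
    }
}
```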
So, the logical conclusion would be:
for (long i = 0; i < 10000000; i++)
{
Interlocked.Add(ref _value, offset);
}
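A caveat worth noting alongside the atomic add: on a 32-bit runtime a plain read of a long can tear (the two 32-bit halves may come from different writes), so the atomic add should be paired with an atomic read. A sketch:

```csharp
// Atomic read-modify-write of the shared 64-bit counter.
Interlocked.Add(ref _value, offset);

// On 32-bit platforms, reading the long back also needs to be atomic;
// Interlocked.Read guarantees an untorn 64-bit read.
long snapshot = Interlocked.Read(ref _value);
```

On 64-bit runtimes aligned 64-bit reads are already atomic, but using Interlocked.Read keeps the code correct on both.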
Which, of course, leads to zero.
However, the following also leads to zero:
for (long i = 0; i < 10000000; i++)
{
lock (_syncRoot)
{
_value += offset;
}
}
Of course, the lock statement ensures that the reads and writes are not reordered, because it employs a full fence. However, I cannot find any information concerning the synchronization of processor caches. If there were no cache synchronization, I'd expect to see a deviation from 0 after both threads have finished.
Can someone explain to me how lock / Monitor.Enter/Exit ensures that the processor caches (L1/L2) are synchronized?
Cache coherence in this case does not depend on lock. What the lock statement ensures is that the competing instruction sequences are not interleaved. a += b is not atomic to the processor; it looks roughly like:

    load  a into a register
    add   b to the register
    store the register back to a

And without the lock, the two threads' sequences may interleave, for example:

    Thread 1: load a          (reads 0)
    Thread 2: load a          (reads 0)
    Thread 1: add +1, store a (a = 1)
    Thread 2: add -1, store a (a = -1, Thread 1's update is lost)
But this is not about cache coherence; it's a higher-level concern.
So, lock does not ensure that the caches are synchronized. Cache synchronization is a processor-internal feature that does not depend on your code. You can read about it here.
When one core writes a value to memory and the second core later tries to read that value, the second core won't have an up-to-date copy in its cache: its cache entry has been invalidated, so a cache miss occurs. That cache miss forces the cache entry to be refreshed with the actual value.
The CLR memory model guarantees (requires) that loads/stores can't cross a fence. It's up to the CLR implementers to enforce this on real hardware, which they do. However, this is based on the advertised/understood behavior of the hardware, which can be wrong.
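Those fence guarantees can also be requested explicitly, without taking a lock. A minimal sketch of release/acquire publication using Volatile.Write / Volatile.Read (the field and method names here are made up for illustration):

```csharp
using System.Threading;

class Publication
{
    long _data;
    bool _ready;

    void Producer()
    {
        _data = 42;
        // Release semantics: the store to _data cannot be
        // reordered to after this flag store.
        Volatile.Write(ref _ready, true);
    }

    void Consumer()
    {
        // Acquire semantics: if the flag is seen as true, the
        // subsequent load of _data must observe the value 42.
        if (Volatile.Read(ref _ready))
        {
            long d = _data;
        }
    }
}
```

Monitor.Enter/Exit give you (at least) these acquire/release fences as a side effect of taking and releasing the lock, which is why the locked version of the counter behaves correctly.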
The lock keyword is just syntactic sugar for a pair of System.Threading.Monitor.Enter() and System.Threading.Monitor.Exit() calls. The implementations of Monitor.Enter() and Monitor.Exit() put up a memory fence, which entails performing the architecture-appropriate cache flushing. So your other thread won't proceed until it can see the stores that result from the execution of the locked section.
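Concretely, the C# compiler expands the lock statement from the question into roughly the following (a sketch of the C# 4.0+ pattern; earlier compilers emitted a slightly different shape without the lockTaken flag):

```csharp
// Equivalent of: lock (_syncRoot) { _value += offset; }
bool lockTaken = false;
try
{
    // Enter implies (at least) an acquire fence on the way in.
    Monitor.Enter(_syncRoot, ref lockTaken);
    _value += offset;
}
finally
{
    // Exit implies (at least) a release fence, publishing the
    // store to _value before the lock becomes available again.
    if (lockTaken)
        Monitor.Exit(_syncRoot);
}
```

The try/finally guarantees the monitor is released even if the body throws, and the lockTaken flag guards against an asynchronous exception between Enter and the try block.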