
Is calling spin_lock_irqsave, instead of local_irq_disable followed by spin_lock, the same for a per-processor struct?

Consider the following kernel code:

local_irq_disable();
__update_rq_clock(rq);
spin_lock(&rq->lock);

rq is a pointer to a per-processor struct (i.e., not subject to SMP concurrency). Since rq will never be accessed anywhere else after calling local_irq_disable (because rq is used by only a single processor, and disabling local interrupts means no interrupt handler will run on that CPU), what is the point of embedding __update_rq_clock between the two calls above? In other words, what difference does it make from the following, which disables interrupts and takes the lock in a single call, given that rq is safe inside __update_rq_clock in both cases, whether locked or not?

spin_lock_irqsave(&rq->lock, flags);
__update_rq_clock(rq);

First and foremost: the two examples you show have different semantics. local_irq_disable does not save the old IRQ state. In other words, when the corresponding local_irq_enable is called, it will forcibly re-enable IRQs (whether or not they were already disabled before local_irq_disable). On the other hand, spin_lock_irqsave does save the old IRQ state, so it can later be restored through spin_unlock_irqrestore. For this reason, the two pieces of code you show are very different, and it doesn't make much sense to compare them.
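
To make the difference concrete, here is a minimal sketch (a hypothetical helper with a hypothetical example_lock, not actual scheduler code) of how the unconditional variant can corrupt the caller's IRQ state, while the save/restore variant preserves it:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);  /* hypothetical lock */

/* Broken if called from a context that has already disabled IRQs:
 * local_irq_enable() unconditionally turns interrupts back on. */
static void broken_helper(void)
{
    local_irq_disable();
    spin_lock(&example_lock);
    /* ... critical section ... */
    spin_unlock(&example_lock);
    local_irq_enable();
}

/* Safe from any context: spin_unlock_irqrestore() puts IRQs back
 * into whatever state the caller had them in. */
static void correct_helper(void)
{
    unsigned long flags;

    spin_lock_irqsave(&example_lock, flags);
    /* ... critical section ... */
    spin_unlock_irqrestore(&example_lock, flags);
}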

Now, coming to the real problem:

Since rq will never be accessed anywhere else after calling local_irq_disable (because rq is used by only a single processor and disabling local interrupts means no interrupt handlers will run on that CPU)

This is not always true. There is no "magic barrier" that stops CPUs from accessing another CPU's per-CPU data. It is still possible, and in such cases extra care must be taken by means of a proper locking mechanism.

While per-CPU variables are usually meant to provide fast access to an object for a single CPU, and therefore can have the advantage of not requiring locking, there is nothing other than convention that keeps processors from digging around in other processors' per-CPU data (quote).
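
As an illustration, here is a minimal sketch using the generic per-CPU API with a hypothetical pcpu_counter type (not the scheduler's actual data) of one CPU reaching into another CPU's per-CPU data, which is exactly why such data may need its own lock:

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/cpumask.h>
#include <linux/types.h>

/* Hypothetical per-CPU counter; it carries its own lock precisely
 * because other CPUs may legitimately reach into it. */
struct pcpu_counter {
    spinlock_t lock;
    u64        value;
};

static DEFINE_PER_CPU(struct pcpu_counter, my_counter);

static void init_counters(void)
{
    int cpu;

    for_each_possible_cpu(cpu)
        spin_lock_init(&per_cpu(my_counter, cpu).lock);
}

/* Any CPU can address any other CPU's copy through per_cpu();
 * nothing but convention (and this lock) protects the access. */
static void bump_counter_of(int cpu)
{
    struct pcpu_counter *c = &per_cpu(my_counter, cpu);
    unsigned long flags;

    spin_lock_irqsave(&c->lock, flags);
    c->value++;
    spin_unlock_irqrestore(&c->lock, flags);
}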

Runqueues are a great example of this: since the scheduler often needs to migrate tasks from one runqueue to another, it will certainly need to access two runqueues at the same time at some point. Indeed, this is probably one of the reasons why struct rq has a .lock field.
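
The idea behind the scheduler's double_rq_lock() can be sketched roughly as follows (simplified here to bare spinlocks; the ordering rule is what matters): whenever two runqueue locks must be held at once, they are always acquired in a fixed address order, so that two CPUs migrating tasks in opposite directions cannot deadlock.

#include <linux/spinlock.h>

/* Simplified sketch: take two locks in a stable (address) order so
 * that concurrent callers locking the same pair cannot deadlock. */
static void double_lock(spinlock_t *a, spinlock_t *b)
{
    if (a == b) {
        spin_lock(a);
    } else if (a < b) {
        spin_lock(a);
        spin_lock_nested(b, SINGLE_DEPTH_NESTING);
    } else {
        spin_lock(b);
        spin_lock_nested(a, SINGLE_DEPTH_NESTING);
    }
}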

In fact, doing an rq clock update without holding rq->lock seems to be considered a bug in recent kernel code, as you can see from this lockdep assertion in update_rq_clock():

void update_rq_clock(struct rq *rq)
{
    s64 delta;

    lockdep_assert_held(&rq->lock);

    // ...

It feels like the statements you show in your first code snippet should be re-ordered to lock first and then update, but the code is quite old (v2.6.25), and the call to __update_rq_clock() seems to be deliberately made before acquiring the lock. Hard to tell why, but maybe the old runqueue semantics did not require locking in order to update .clock / .prev_clock_raw, and thus the locking was done afterwards just to minimize the size of the critical section.
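
For reference, a minimal sketch (assuming rq and flags are in scope) of what the re-ordered, state-preserving version would look like, which is also the ordering the modern lockdep assertion shown above expects:

spin_lock_irqsave(&rq->lock, flags);       /* IRQs off, lock held */
__update_rq_clock(rq);                     /* clock update done under rq->lock */
/* ... rest of the critical section ... */
spin_unlock_irqrestore(&rq->lock, flags);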

