
mmap shared buffer read problems

I have a kernel module that allocates a large buffer of memory; this buffer is then mmap-ed into userspace.
The module receives some data from hardware and then puts the new data into the buffer with a flag in front of it (the memory is initialized to zero, and the flag is set to 1).

The userspace program reads the flag in a loop before returning a pointer to the valid data.

Here is a simplified version of the code:

uint8_t *getData()
{
    // Spin until the flag word at the start of the shared buffer becomes 1,
    // then return a pointer to the data that follows it.
    while (1)
    {
        if (*((volatile uint32_t *)this->buffer) == 1)
            return this->buffer + sizeof(uint32_t);
    }
}

The memory region is mapped as shared, and a full memory dump of the buffer confirms that the buffer is written to correctly.

The problem is that after a certain number of correct reads, this function stops returning.
Could this be due to CPU caching? Is there a way to circumvent that and make sure the read is made directly from RAM each time, and not from the cache?

Yes, it's likely due to the CPU cache on the reader side. One might think the volatile keyword should protect against this sort of problem, but that's not quite right: volatile is simply a directive to the compiler not to keep the variable in a register, which is not the same thing as directing the CPU to read directly from main memory every time.

The problem needs to be solved on the write side. From your description, it sounds like the write happens in the kernel module and the read happens on the user side. If these two operations happen on different CPUs (different caching domains), and nothing triggers a cache invalidation on the read side, you will get stuck on the read side exactly as you describe. You need to force a store-buffer flush after your store instruction. Assuming this is the Linux kernel, inserting a call to smp_mb() right after you have set the flag and the value in the module will most likely do the right thing on all architectures; a minimal sketch is shown below.
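
A minimal sketch of what that write path might look like in the module, assuming the layout described in the question (a 32-bit flag followed by the payload). The struct name my_dev, the dma_buf field, and publish_data are placeholders, not the original code:

#include <linux/types.h>
#include <linux/string.h>
#include <linux/compiler.h>
#include <asm/barrier.h>

/* Hypothetical device state; dma_buf stands in for the mmap-ed buffer. */
struct my_dev {
    u8 *dma_buf;
};

static void publish_data(struct my_dev *dev, const void *src, size_t len)
{
    u32 *flag   = (u32 *)dev->dma_buf;
    u8 *payload = dev->dma_buf + sizeof(u32);

    memcpy(payload, src, len);   /* 1. copy the new data into the shared buffer      */
    smp_wmb();                   /* 2. order the data stores before the flag store   */
    WRITE_ONCE(*flag, 1);        /* 3. set the flag the userspace reader polls       */
    smp_mb();                    /* 4. full barrier after the stores, as suggested   */
}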

A better way to alert the userspace application that more data is available is to have it block in a read() on a file descriptor provided by the kernel module, and have the kernel module wake it up when more data arrives; a rough sketch follows.
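
A rough sketch of that approach, assuming the module exposes a character device. Names such as data_wq, data_ready, signal_new_data, and mydev_read are illustrative only:

#include <linux/fs.h>
#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/errno.h>

/* Hypothetical wait queue and flag shared with the data-producing path. */
static DECLARE_WAIT_QUEUE_HEAD(data_wq);
static bool data_ready;

/* Called by the module after new data has been written into the buffer. */
static void signal_new_data(void)
{
    data_ready = true;
    wake_up_interruptible(&data_wq);
}

/* .read handler: the userspace process sleeps here until data arrives. */
static ssize_t mydev_read(struct file *file, char __user *ubuf,
                          size_t count, loff_t *ppos)
{
    if (wait_event_interruptible(data_wq, data_ready))
        return -ERESTARTSYS;      /* interrupted by a signal */
    data_ready = false;
    /* The payload itself is still read through the mmap-ed buffer;
       this read() only tells userspace that new data is available. */
    return 0;
}

On the userspace side, the polling loop in getData() is then replaced by a blocking read() (or a poll()/select()) on the module's file descriptor, after which the data can be taken straight from the mapped buffer.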
