
Linux: How to mmap a sequence of physically contiguous areas into user space?

In my driver I have a certain number of physically contiguous DMA buffers (e.g. 4 MB each) to receive data from a device. They are handled by the hardware via an SG list. As the received data will be subjected to intensive processing, I don't want to switch off caching; instead I will call dma_sync_single_for_cpu after each buffer is filled by DMA.

To simplify data processing, I want those buffers to appear as a single huge, contiguous, circular buffer in user space. For a single buffer I would simply use remap_pfn_range or dma_mmap_coherent. However, I can't call those functions multiple times to map consecutive buffers into one VMA.
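For reference, the single-buffer case could look roughly like this. This is only a sketch: my_dev, DMA_BUFLEN and buf_dma_t are the names used later in this post, and it assumes the DMA (bus) address equals the physical address, which does not hold behind an IOMMU.

```c
/* Sketch: map one DMA buffer into user space with remap_pfn_range().
 * Assumes buf_dma_t[0] is the bus address returned by the allocator and
 * that it equals the physical address (no IOMMU in between). */
static int my_single_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	if (size > DMA_BUFLEN)
		return -EINVAL;

	return remap_pfn_range(vma, vma->vm_start,
			       my_dev->buf_dma_t[0] >> PAGE_SHIFT,
			       size, vma->vm_page_prot);
}
```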

Of course, I can implement the fault operation in vm_operations_struct so that it finds the pfn of the corresponding page in the right buffer and inserts it into the VMA with vm_insert_pfn.

The acquisition will be really fast, so I can't handle mapping when the real data arrive. But this can be solved easily: to have the whole mapping ready before the acquisition starts, my application can simply read the entire mmapped buffer beforehand, so that all pages are already inserted when the first data arrive.

The fault-based trick should work, but maybe there is something more elegant? Just a single function that may be called multiple times to build the whole mapping incrementally?

An additional difficulty is that the solution should be applicable (with minimal adjustments) to kernels from 2.6.32 up to the newest one.
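Across that range of kernels, one concrete adjustment is the fault-handler prototype, which changed several times: as far as I recall, vmf->virtual_address became vmf->address around 4.10, the vma argument moved into struct vm_fault in 4.11, and the return type became vm_fault_t in 4.17 (version cutoffs from memory, so double-check against your target trees). A compat sketch:

```c
#include <linux/version.h>

#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 17, 0)
static vm_fault_t swz_mmap_fault(struct vm_fault *vmf)
#elif LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
static int swz_mmap_fault(struct vm_fault *vmf)
#else
static int swz_mmap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
#endif
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
	struct vm_area_struct *vma = vmf->vma;
	unsigned long address = vmf->address;	/* renamed from virtual_address in 4.10 */
#else
	unsigned long address = (unsigned long)vmf->virtual_address;
#endif
	/* ... translate 'address' to a pfn and insert it into 'vma' as before ... */
	(void)vma;
	return VM_FAULT_NOPAGE;
}
```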

PS. I have seen that annoying post. Is there a danger that, if the application attempts to write something to the mmapped buffer (just doing in-place processing of the data), my carefully built mapping will be destroyed by COW?

Below is my solution, which works for buffers allocated with dmam_alloc_noncoherent.

Allocation of the buffers:

[...]
for (i = 0; i < DMA_NOFBUFS; i++) {
    my_dev->buf_addr[i] = dmam_alloc_noncoherent(&my_dev->dev, DMA_BUFLEN,
                                                 &my_dev->buf_dma_t[i], GFP_USER);
    if (my_dev->buf_addr[i] == NULL) {
        res = -ENOMEM;
        goto err1;
    }
    /* Make the buffer ready for filling by the device */
    dma_sync_single_range_for_device(&my_dev->dev, my_dev->buf_dma_t[i],
                                     0, DMA_BUFLEN, DMA_FROM_DEVICE);
}
[...]

Mapping of the buffers:

void swz_mmap_open(struct vm_area_struct *vma)
{
}

void swz_mmap_close(struct vm_area_struct *vma)
{
}

static int swz_mmap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
    unsigned long offset;
    char *buffer = NULL;
    int buf_num = 0;

    /* Calculate the offset (according to the info in
     * https://lxr.missinglinkelectronics.com/linux+v2.6.32/drivers/gpu/drm/i915/i915_gem.c#L1195
     * it is better not to use vmf->pgoff). */
    offset = (unsigned long)vmf->virtual_address - vma->vm_start;
    buf_num = offset / DMA_BUFLEN;
    if (buf_num >= DMA_NOFBUFS) {
        printk(KERN_ERR "Access outside the buffer\n");
        return VM_FAULT_SIGBUS;
    }
    offset -= buf_num * DMA_BUFLEN;
    buffer = my_dev->buf_addr[buf_num];
    if (vm_insert_pfn(vma, (unsigned long)vmf->virtual_address,
                      virt_to_phys(&buffer[offset]) >> PAGE_SHIFT))
        return VM_FAULT_SIGBUS;
    return VM_FAULT_NOPAGE;
}

struct vm_operations_struct swz_mmap_vm_ops =
{
    .open =     swz_mmap_open,
    .close =    swz_mmap_close,
    .fault =    swz_mmap_fault,    
};

static int char_sgdma_wz_mmap(struct file *file, struct vm_area_struct *vma)
{
    vma->vm_ops = &swz_mmap_vm_ops;
    /* VM_RESERVED and VM_CAN_NONLINEAR are gone in later kernels;
     * on >= 3.7 use VM_DONTEXPAND | VM_DONTDUMP in place of VM_RESERVED. */
    vma->vm_flags |= VM_IO | VM_RESERVED | VM_CAN_NONLINEAR | VM_PFNMAP;
    swz_mmap_open(vma);
    return 0;
}
