
Large shared memory between Kernel space and user space

I am working on a research project, and I have to share a large data structure between a kernel module and a user space program. The data structure can get very large, and since the application is performance critical, I tried using shared memory to avoid the overhead of serializing the structure (as would be needed with other interfaces such as Netlink). I currently have test code based on this link:

http://people.ee.ethz.ch/~arkeller/linux/kernel_user_space_howto.html#s8

They are using debugfs. I added the code from the link to my kernel module and wrote a custom user space program similar to theirs. It worked perfectly with small sizes of my data structure. As you can see in the code, they share only one page of memory. Is there an easy way to share much more memory than a single page?

There's not really much difference in handling many pages.

Allocate more pages in the open handler (alloc_pages or a variant) and store them in an array; then your fault handler will need to (based on the faulting address):

  • calculate the offset into the area with something like "(((unsigned long) vmf->virtual_address - vma->vm_start) + (vma->vm_pgoff << PAGE_SHIFT))"
  • divide by PAGE_SIZE to calculate page index within the array
  • range check to make sure it's valid
  • pull struct page * from array
  • call get_page on it and hand it back to the kernel (via vmf->page) to complete the mapping

You can continue to use debugfs or, with a small amount of additional work in the module initialization, put a more standard character device frontend on it. (For that, nothing really needs to change outside of the module_init/module_exit parts.)
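Putting the steps together, a multi-page version might look like the following kernel-module fragment. This is an untested sketch, not a drop-in implementation: the names shm_pages, SHM_NPAGES, shm_alloc_pages and shm_mmap are made up for illustration, error unwinding is elided, and the fault-handler signature shown (vm_fault taking only a struct vm_fault *, with vmf->address) is the one used by recent kernels; older kernels pass the vma separately and call the field vmf->virtual_address, as in the quoted expression above.

```c
#include <linux/mm.h>
#include <linux/gfp.h>

#define SHM_NPAGES 16                 /* illustrative: 16 pages = 64 KiB */

static struct page *shm_pages[SHM_NPAGES];

/* Call from the file's open handler: allocate the backing pages. */
static int shm_alloc_pages(void)
{
    int i;

    for (i = 0; i < SHM_NPAGES; i++) {
        shm_pages[i] = alloc_page(GFP_KERNEL);
        if (!shm_pages[i])
            return -ENOMEM;           /* real code must free the earlier pages */
    }
    return 0;
}

/* Fault handler: pick the right page for the faulting address. */
static vm_fault_t shm_vm_fault(struct vm_fault *vmf)
{
    struct vm_area_struct *vma = vmf->vma;
    unsigned long offset =
        (vmf->address - vma->vm_start) + (vma->vm_pgoff << PAGE_SHIFT);
    unsigned long index = offset >> PAGE_SHIFT;   /* divide by PAGE_SIZE */

    if (index >= SHM_NPAGES)          /* range check */
        return VM_FAULT_SIGBUS;

    get_page(shm_pages[index]);       /* take a reference for this mapping */
    vmf->page = shm_pages[index];
    return 0;
}

static const struct vm_operations_struct shm_vm_ops = {
    .fault = shm_vm_fault,
};

/* mmap file operation: just install the vm_ops; faults do the rest. */
static int shm_mmap(struct file *filp, struct vm_area_struct *vma)
{
    vma->vm_ops = &shm_vm_ops;
    return 0;
}
```

Wiring shm_mmap into a struct file_operations works the same whether the file lives in debugfs or behind a character device, which is why only the module_init/module_exit registration code differs between the two frontends.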
