Large physically contiguous memory area

For my M.Sc. thesis, I have to reverse-engineer the hash function Intel uses inside its CPUs to spread data among Last Level Cache slices in Sandy Bridge and newer generations. To this end, I am developing an application on Linux which needs a physically contiguous memory area in order to run my tests. The idea is to read data from this area so that they get cached, probe whether older data have been evicted (through delay measurements or LLC miss counters) in order to find colliding memory addresses, and finally discover the hash function by comparing these colliding addresses. The same procedure has already been used on Windows by a researcher, and proved to work.

To do this, I need to allocate an area that must be large (64 MB or more) and fully cachable, so mapped without DMA-friendly (uncached) attributes. How can I perform this allocation?

To have full control over the allocation (i.e., for it to be really physically contiguous), my idea was to write a Linux module, export a device and mmap() it from userspace, but I do not know how to allocate that much contiguous memory inside the kernel. I have heard about the Linux Contiguous Memory Allocator (CMA), but I do not know how it works.
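
For illustration, the device-plus-mmap() part of that idea could look roughly like the sketch below. This is only a sketch under assumptions: it presumes the physically contiguous region has already been obtained somehow (from CMA, a boot-time reservation, or similar), and contig_phys, contig_size and /dev/contig_mem are placeholder names rather than an existing API. The mmap handler hands the region to userspace with normal write-back (cacheable) attributes via remap_pfn_range().

    /* Sketch only: contig_phys/contig_size describe a physically contiguous
     * region obtained elsewhere (CMA, boot-time reservation, ...). */
    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/mm.h>

    static phys_addr_t contig_phys;   /* physical base of the contiguous area */
    static size_t      contig_size;   /* its length in bytes */

    static int contig_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long len = vma->vm_end - vma->vm_start;

        if (len > contig_size)
            return -EINVAL;

        /* Map the area into the calling process with the default (cacheable)
         * page protection, so reads really go through the cache hierarchy. */
        return remap_pfn_range(vma, vma->vm_start,
                               contig_phys >> PAGE_SHIFT,
                               len, vma->vm_page_prot);
    }

    static const struct file_operations contig_fops = {
        .owner = THIS_MODULE,
        .mmap  = contig_mmap,
    };

    static struct miscdevice contig_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "contig_mem",           /* appears as /dev/contig_mem */
        .fops  = &contig_fops,
    };

    static int __init contig_init(void)
    {
        return misc_register(&contig_dev);
    }

    static void __exit contig_exit(void)
    {
        misc_deregister(&contig_dev);
    }

    module_init(contig_init);
    module_exit(contig_exit);
    MODULE_LICENSE("GPL");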

Applications don't see physical memory; a process has some address space in virtual memory. Read about the MMU (what is contiguous in virtual space might not really be physically contiguous, and vice versa).
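
Because of that translation, a buffer that is contiguous in virtual memory may be scattered across physical memory. Below is a small sketch of one way to check physical adjacency from userspace through /proc/self/pagemap (it needs root on kernels >= 4.0, since unprivileged reads return zeroed PFNs):

    /* Sketch: print the physical frame numbers of two virtually adjacent
     * pages; if the second PFN equals the first plus one, the pages are
     * also physically adjacent. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static uint64_t virt_to_pfn(int pagemap_fd, void *vaddr)
    {
        uint64_t entry;
        off_t off = ((uintptr_t)vaddr / sysconf(_SC_PAGESIZE)) * sizeof(entry);

        if (pread(pagemap_fd, &entry, sizeof(entry), off) != sizeof(entry))
            return 0;
        if (!(entry & (1ULL << 63)))           /* bit 63: page present */
            return 0;
        return entry & ((1ULL << 55) - 1);     /* bits 0-54: PFN */
    }

    int main(void)
    {
        long psz = sysconf(_SC_PAGESIZE);
        char *buf = aligned_alloc(psz, 2 * psz);
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (!buf || fd < 0) { perror("setup"); return 1; }

        buf[0] = buf[psz] = 1;                 /* fault both pages in */

        printf("pfn[0]=%llx pfn[1]=%llx\n",
               (unsigned long long)virt_to_pfn(fd, buf),
               (unsigned long long)virt_to_pfn(fd, buf + psz));
        close(fd);
        free(buf);
        return 0;
    }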

You might perhaps want to lock some memory using mlock(2).
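
A minimal sketch of that, assuming the buffer comes from the hypothetical /dev/contig_mem device sketched above (locking 64 MB may require raising RLIMIT_MEMLOCK or having CAP_IPC_LOCK):

    /* Sketch: map the (hypothetical) device and pin the pages with mlock(2),
     * so they stay resident during the timing measurements. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define AREA_SIZE (64UL * 1024 * 1024)     /* 64 MB, as in the question */

    int main(void)
    {
        int fd = open("/dev/contig_mem", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        void *buf = mmap(NULL, AREA_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        if (mlock(buf, AREA_SIZE) != 0)
            perror("mlock");

        /* ... run the cache-probing experiments over buf here ... */

        munlock(buf, AREA_SIZE);
        munmap(buf, AREA_SIZE);
        close(fd);
        return 0;
    }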

But your application will be scheduled, and other processes (or scheduled tasks) would dirty your CPU cache. See also sched_setaffinity(2).
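
Pinning the measuring process to one core with sched_setaffinity(2) is straightforward, though it only protects per-core state; the LLC is shared, so other cores can still evict your lines. A minimal sketch:

    /* Sketch: confine the current process to CPU 0. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);                      /* run only on CPU 0 */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* ... measurement code runs here, confined to CPU 0 ... */
        return 0;
    }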

(and even kernel code might perhaps be preempted)

This page on Kernel Newbies has some ideas about memory allocation. But the max for get_free_pages looks like 8 MiB. (Perhaps that's a compile-time constraint?)
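
That limit is indeed a compile-time constraint: the buddy allocator refuses any single allocation above a maximum order (commonly order 10 with 4 KiB pages, i.e. 4 MiB, though some configurations raise it), which is why get_free_pages cannot hand out 64 MB in one piece. A small module sketch of the largest request that typically still succeeds:

    /* Sketch: ask the buddy allocator for one 4 MiB physically contiguous
     * block; anything above the compile-time maximum order simply fails. */
    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/module.h>

    #define BIG_SIZE (4UL * 1024 * 1024)       /* 4 MiB -> order 10 */

    static unsigned long area;

    static int __init bigalloc_init(void)
    {
        unsigned int order = get_order(BIG_SIZE);

        area = __get_free_pages(GFP_KERNEL, order);
        if (!area)
            return -ENOMEM;

        pr_info("got %lu KiB physically contiguous at %p\n",
                (PAGE_SIZE << order) / 1024, (void *)area);
        return 0;
    }

    static void __exit bigalloc_exit(void)
    {
        free_pages(area, get_order(BIG_SIZE));
    }

    module_init(bigalloc_init);
    module_exit(bigalloc_exit);
    MODULE_LICENSE("GPL");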

Since this would be all-custom, you could explore the mem= boot parameter of the Linux kernel. This will limit the amount of memory the kernel uses, and you can party all over the remaining memory without anyone knowing. Heck, if you boot up a busybox system, you could probably do mem=32M, but even mem=256M should work if you're not booting a GUI.
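
A hedged sketch of how a module could then claim part of that hidden memory with a normal cacheable mapping; the 7 GiB boundary and the 64 MB size are assumptions for an imaginary 8 GiB machine booted with mem=7G (on kernels without memremap(), ioremap_cache() plays a similar role). The physical base used here is exactly the kind of address the earlier remap_pfn_range() sketch would expose to userspace:

    /* Sketch: with the kernel booted using mem=7G on an 8 GiB machine, the
     * RAM above 7 GiB is never given to the page allocator and can be
     * claimed directly.  The addresses below are assumptions for the example. */
    #include <linux/io.h>
    #include <linux/module.h>

    #define HIDDEN_PHYS (7ULL * 1024 * 1024 * 1024)   /* just past mem=7G */
    #define HIDDEN_SIZE (64UL * 1024 * 1024)          /* 64 MB test area */

    static void *hidden_va;

    static int __init hidden_init(void)
    {
        /* MEMREMAP_WB asks for a normal write-back (cacheable) mapping. */
        hidden_va = memremap(HIDDEN_PHYS, HIDDEN_SIZE, MEMREMAP_WB);
        if (!hidden_va)
            return -ENOMEM;

        pr_info("hidden region mapped at %p\n", hidden_va);
        return 0;
    }

    static void __exit hidden_exit(void)
    {
        memunmap(hidden_va);
    }

    module_init(hidden_init);
    module_exit(hidden_exit);
    MODULE_LICENSE("GPL");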

You will also want to look into the Offline Scheduler (and here). It "unplugs" the CPU from Linux so you can have full control over ALL code running on it. (Some parts of this are already in the mainline kernel, and maybe all of it is.)
