How does System V IPC deal with fragmentation when allocating a large block of memory using "shmget"?
I'm allocating a large block of shared memory using shmget on an embedded system:
shmid = shmget(key, 16777216, IPC_CREAT | 0666);
The system is running uClinux (2.6.28 Linux kernel) with the slab allocator. The CPU has no MMU.
Once in a while, when running the above shmget call, I get a page allocation failure. This happens when I'm running out of available RAM, but it also happens occasionally when I have plenty of RAM available.

I suspect the culprit is fragmentation, but I'm not quite sure. So my question is: can this error occur because the IPC subsystem requires a contiguous 16 MB physical segment for this call and cannot find one due to fragmented memory, thus throwing the allocation failure, or does the issue lie elsewhere?
On a !MMU system, you do not have virtual memory, so your supposition is correct: a contiguous block of physical memory is required for that mapping.
You can alleviate this issue by refactoring your application to use multiple smaller shared memory blocks, and/or first allocating the shared memory as early as possible after boot.