
Segmentation fault when allocating array on the heap

Suppose I declare a global array on the heap with a size that exceeds the limit of the heap. Of course, a segmentation fault will be thrown. My question is: what happens when we do that? Will the extra integers overwrite some other parts of our computer system?

This depends on the operating system you are using (if any).

Systems that provide a process virtual machine abstraction - that is to say, any *nix variant, Windows, and some RTOSs such as QNX

In these systems, there is a distinction between virtual memory (address space) and committed physical pages. The process gains physical pages when writes occur to the associated virtual address space. Thus it is possible to allocate a larger heap-block than there is physical memory on the system, and the heap can grow on demand. The system may use paging to maintain a working set of pages backed by real memory, and write those that can't be accommodated to disk. This is what many people (incorrectly) describe as 'virtual memory'. Notably, iOS, Android and many embedded systems don't have a pager.
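
As an illustration, here is a minimal sketch, assuming a 64-bit Linux-style system with an overcommitting allocator (the 8 GiB figure is an arbitrary assumption); the allocation can succeed even without that much physical RAM, because pages are committed only when written:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t big = (size_t)8 * 1024 * 1024 * 1024;  /* 8 GiB of address space */
    char *p = malloc(big);                        /* may succeed without 8 GiB of RAM */

    if (p == NULL) {
        puts("allocation refused up front");
        return 1;
    }

    p[0] = 1;   /* touching one page commits roughly one physical page, not 8 GiB */

    free(p);
    return 0;
}

Writing to the entire block instead of a single page is what would force the system to page, or to terminate the process once physical memory runs out.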

The operating system is likely to kill your process if it uses memory abusively - for instance, allocating a huge heap-block and then writing randomly to all of it. An operating system might apply a limit to the virtual address space or number of physical pages a process can have and will terminate the process when this is exceeded.
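
On POSIX systems, one way such a limit can be applied is setrlimit(RLIMIT_AS), which caps the process's virtual address space; a hedged sketch follows, where the 256 MiB cap and 512 MiB request are arbitrary assumptions:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    rl.rlim_cur = 256UL * 1024 * 1024;   /* soft limit: 256 MiB of address space */
    rl.rlim_max = 256UL * 1024 * 1024;   /* hard limit */

    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* An allocation larger than the limit now fails cleanly with NULL. */
    void *p = malloc(512UL * 1024 * 1024);
    printf("512 MiB allocation %s\n", p ? "succeeded" : "failed (NULL)");
    free(p);
    return 0;
}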

Overrunning the end of a heap-block is undefined behaviour in C. This may generate an exception - or any other unexpected consequence. It's a moot point whether you have overrun the entire heap as well at this point.
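
A minimal example of such an overrun (purely illustrative; the outcome is undefined and may differ on every run):

#include <stdlib.h>

int main(void)
{
    int *block = malloc(10 * sizeof *block);
    if (block == NULL)
        return 1;

    block[10] = 42;   /* one past the end: undefined behaviour */

    free(block);      /* may crash here, later, or not at all */
    return 0;
}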

All of these operating systems will prevent trashing of system memory by a process.

Bare-metal systems, some embedded operating systems

These systems lack the process virtual machine abstraction and memory protection that goes with it; they lack paging and will typically not allow an allocation of a larger heap-block than can be accommodated in physical pages. Overwriting the end of an allocated block will have undefined behaviour.

Suppose I declare a global array on the heap with a size that exceeds the limit of the heap.

You can't declare a global array on the heap, because you do not have access to that memory at compile time.

You probably mean an array with static storage duration; if its size is larger than the memory reserved for static storage duration objects, the linker will report an error.

Heap memory can only be allocated dynamically at run time, using the malloc family of functions.

#include <stdlib.h>

int a[500]; // 'a' is a static storage duration object

void foo(void)
{
   int b[500];        // 'b' is an automatic storage duration object (most implementations use the stack for it)
   static int c[500]; // 'c' is a static storage duration object
   int *d;            // 'd' is an automatic storage duration pointer (most implementations use a register or the stack for it)

   d = malloc(1000);  // 'd' references a 1000-byte memory area allocated on the heap

   free(d);           // release the heap block when it is no longer needed
}

If you try to allocate more memory than is available, the allocation function will return NULL rather than crash. If you try to access memory that does not belong to the object, that is undefined behaviour, and it may result in a segmentation fault.
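
A minimal sketch of the usual defensive pattern, checking the pointer returned by malloc before using it (the deliberately absurd request size is an assumption for illustration):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = SIZE_MAX / 2;   /* deliberately absurd request size */
    char *p = malloc(n);       /* returns NULL instead of crashing */

    if (p == NULL) {
        fprintf(stderr, "allocation of %zu bytes failed\n", n);
        return 1;
    }

    p[0] = 1;   /* only reached if the allocation actually succeeded */
    free(p);
    return 0;
}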
