
Accellera SystemC error with a large number of SC_THREAD

In the context of a SystemC simulation with many SC_THREAD processes (> 32000), I am facing the following error with the Accellera 2.3.1 implementation on an Intel X86 platform running Ubuntu 15.04:

sc_cor_qt.cpp:114: virtual void sc_core::sc_cor_qt::stack_protect(bool) 
Assertion `ret == 0' failed

The default implementation of the SystemC kernel uses user-level threads (also called coroutines) to implement SystemC processes. The static processes (SC_THREAD and SC_CTHREAD) are initialized in sc_simcontext.cpp at line 759 by thread_p->prepare_for_simulation(). This function creates the user-level thread object and then enables stack protection.

The stack of the user-level thread is allocated on the heap of the SystemC simulation process by the following line: cor->m_stack = new char[cor->m_stack_size];

The issue I am facing happens in the stack-protection function called after creation: it uses an mprotect system call to make the page just after the stack of the user-level thread (again, lying in the heap of the Linux process) completely inaccessible (PROT_NONE). The ENOMEM error I get from mprotect means either that the page we want to protect was never mapped into the process, or that the kernel was not able to allocate internal structures while running the mprotect call. Unfortunately I am not able to tell which of these two cases applies, or how to fix it.

Moreover, I can't see where this extra page is allocated in the heap of the Linux process before the mprotect call is made.

Does anyone know what is going on, and/or what I can do next to debug this issue further?

The problem is the maximum number of memory mappings allowed for a single process. Each mprotect call results in an additional memory mapping, so the total number of mappings exceeded the default limit on my system. To increase this limit, use:

sudo sysctl vm/max_map_count=524240

It sounds like the majority of your storage should be on the heap, but try increasing the stack size in case it is limiting allocation of some SC objects.

Check the current stack size (from csh):

limit (or from sh: ulimit -a)

If it is not already unlimited, increase it with one of these:

limit stacksize 1024m (from sh: ulimit -s 1024000)

limit stacksize unlimited (from sh: ulimit -s unlimited)
