
Boehm GC C++ garbage collector: Too many heap sections, increase MAXHINCR or MAX_HEAP_SECTS

I am using the Boehm C++ garbage collector in my application. The application uses a Levenshtein deterministic finite automaton Python program to compute the Levenshtein distance between two strings. I ported the Python program to C++ on CentOS Linux using gcc 4.1.2.

Recently I noticed that after the application has been running for more than 10 minutes, I get a SIGABRT with the error message: Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS. I would like to know if anyone knows how to fix or work around this problem.

Here is my gdb stack trace. Thanks.

  Program received signal SIGABRT, Aborted.
(gdb) bt
#0  0x002ed402 in __kernel_vsyscall ()
#1  0x00b1bdf0 in raise () from /lib/libc.so.6
#2  0x00b1d701 in abort () from /lib/libc.so.6
#3  0x00e28db4 in GC_abort (msg=0xf36de0 "Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS")
    at ../Source/misc.c:1079
#4  0x00e249a0 in GC_add_to_heap (p=0xb7cb7000, bytes=65536) at ../Source/alloc.c:812
#5  0x00e24e45 in GC_expand_hp_inner (n=16) at ../Source/alloc.c:966
#6  0x00e24fc5 in GC_collect_or_expand (needed_blocks=1, ignore_off_page=0) at ../Source/alloc.c:1032
#7  0x00e2519a in GC_allocobj (sz=6, kind=1) at ../Source/alloc.c:1087
#8  0x00e31e90 in GC_generic_malloc_inner (lb=20, k=1) at ../Source/malloc.c:138
#9  0x00e31fde in GC_generic_malloc (lb=20, k=1) at ../Source/malloc.c:194
#10 0x00e322b8 in GC_malloc (lb=20) at ../Source/malloc.c:319
#11 0x00df5ab5 in gc::operator new (size=20) at ../Include/gc_cpp.h:275
#12 0x00de7cb7 in __automata_combined_test2__::DFA::levenshtein_automata (this=0xb7b49080, term=0xb7cb5d20, k=1) 
at ../Source/automata_combined_test2.cpp:199
#13 0x00e3a085 in cDedupe::AccurateNearCompare (this=0x8052cd8, 
    Str1_=0x81f1a1d "GEMMA     OSTRANDER GEM 10   DICARLO", ' ' <repeats 13 times>, 
    "01748SUE       WOLFE     SUE 268  POND", ' ' <repeats 16 times>, 
    "01748REGINA    SHAKIN    REGI16   JAMIE", ' ' <repeats 15 times>, 
    "01748KATHLEEN  MAZUR     CATH10   JAMIE    "..., 
    Str2_=0x81f2917 "LINDA     ROBISON   LIN 53   CHESTNUT", ' ' <repeats 12 times>, 
    "01748MICHELLE  LITAVIS   MICH15   BLUEBERRY", ' ' <repeats 11 times>, 
    "01748JOAN      TITUS     JO  6    SMITH", ' ' <repeats 15 times>, 
    "01748MELINDA   MCDOWELL  MEL 24   SMITH    "..., Size_=10, 

Update:

I looked through the Boehm garbage collector source and header files and realized that the Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS error can be addressed by adding -DLARGE_CONFIG to the CFLAGS section of the GNUmakefile.
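A complementary, runtime-side workaround may be to pre-grow the heap once at startup, so that it is built from a few large sections instead of many small ones. Below is a minimal sketch using the standard Boehm GC C API (GC_INIT, GC_expand_hp, GC_get_heap_size); the 256 MB figure is only an illustrative guess, not a measured requirement.

#include <cstdio>
#include <gc.h>

int main() {
    GC_INIT();                              /* initialise the collector before any allocation */
    if (!GC_expand_hp(256 * 1024 * 1024)) { /* ask for one large heap section up front */
        std::fprintf(stderr, "GC_expand_hp failed\n");
    }
    std::printf("GC heap size: %lu bytes\n", (unsigned long) GC_get_heap_size());
    /* ... run the Levenshtein automaton code here ... */
    return 0;
}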

I tested the change to the GNUmakefile and found that the Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS error message no longer appears. However, I am now getting a new segmentation fault (core dumped). Using gdb, I found that the segmentation fault occurs at line 20 (which I have marked with a comment) in the following function:

set<tuple2<__ss_int, __ss_int> *> *NFA::next_state(set<tuple2<__ss_int, __ss_int> *> *states, str *input) {
    tuple2<__ss_int, __ss_int> *state;
    set<tuple2<__ss_int, __ss_int> *>::for_in_loop __3;
    set<tuple2<__ss_int, __ss_int> *> *__0, *dest_states;
    dict<str *, set<tuple2<__ss_int, __ss_int> *> *> *state_transitions;
    __iter<tuple2<__ss_int, __ss_int> *> *__1;
    __ss_int __2;

    dest_states = (new set<tuple2<__ss_int, __ss_int> *>());

    FOR_IN_NEW(state,states,0,2,3)
        state_transitions = (this->transitions)->get(state, ((dict<str *, set<tuple2<__ss_int, __ss_int> *> *> *)((new dict<void *, void *>()))));

    dest_states->update(state_transitions->get(input, new set<tuple2<__ss_int, __ss_int> *>()));
    dest_states->update(state_transitions->get(NFA::ANY, new set<tuple2<__ss_int, __ss_int> *>()));
    END_FOR

    return (new set<tuple2<__ss_int, __ss_int> *>(this->_expand(dest_states),1));//line20  
}

I would like to know whether this function can be modified to fix the segmentation fault. Thanks.

I finally figured out how to resolve the GC out-of-memory segmentation fault. I replaced the setdefault and get constructs in the Python program; with both, the default dict/set argument is constructed on every call even when the key already exists, which in the generated C++ means an extra GC allocation per lookup. For example, I replaced the Python statement self.transitions.setdefault(src, {}).setdefault(input, set()).add(dest) with:

if src not in self.transitions:
    self.transitions[src] = {}
result = self.transitions[src]
if input not in result:
    result[input] = set()
result[input].add(dest)

In addition, I replaced the Python statement:

new_states = self.transitions.get(state, {}).get(NFA.EPSILON, set()).difference(states)

with:

if state not in self.transitions:
    self.transitions[state] = {}
result = self.transitions[state]
if NFA.EPSILON not in result:
    result[NFA.EPSILON] = set()
cook = result[NFA.EPSILON]
new_states = cook.difference(states)

Finally, I made sure that __shedskin__.init() is called outside of any for or while loop, since __shedskin__.init() invokes the GC allocator. The purpose of all these changes is to reduce the pressure on the GC allocator.
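To make the single-initialisation point concrete, here is a hedged sketch of a driver loop. CompareAll and the pairs vector are hypothetical, and the __shedskin__::__init() / __automata_combined_test2__::__init() entry points and header name are assumptions based on the stack trace and typical Shed Skin output, so the real names should be checked in the generated .cpp/.hpp files.

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

#include "automata_combined_test2.hpp"  /* assumed name of the Shed Skin generated header */

void CompareAll(const std::vector<std::pair<std::string, std::string> > &pairs) {
    __shedskin__::__init();                 /* one-time runtime/GC set-up (assumed entry point) */
    __automata_combined_test2__::__init();  /* one-time module set-up (assumed entry point) */

    for (size_t i = 0; i < pairs.size(); ++i) {
        /* ... build the Levenshtein automaton and compare pairs[i].first and pairs[i].second ... */
    }
}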

I have exercised these changes with about 3 million calls to the GC allocator and have not yet encountered the segmentation fault. Thanks.

