Boehm GC C++ garbage collector: Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS
I am using the Boehm garbage collector in a C++ application. The application computes the Levenshtein distance between two strings using a Python program based on a Levenshtein deterministic finite automaton. I ported the Python program to C++ with gcc 4.1.2 on CentOS Linux.
Recently I noticed that after the application has been running for more than 10 minutes, it aborts with SIGABRT and the error message: Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS
I would like to know whether anyone knows how to fix or work around this problem.
Here is my gdb stack trace. Thanks.
Program received signal SIGABRT, Aborted.
(gdb) bt
#0 0x002ed402 in __kernel_vsyscall ()
#1 0x00b1bdf0 in raise () from /lib/libc.so.6
#2 0x00b1d701 in abort () from /lib/libc.so.6
#3 0x00e28db4 in GC_abort (msg=0xf36de0 "Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS")
at ../Source/misc.c:1079
#4 0x00e249a0 in GC_add_to_heap (p=0xb7cb7000, bytes=65536) at ../Source/alloc.c:812
#5 0x00e24e45 in GC_expand_hp_inner (n=16) at ../Source/alloc.c:966
#6 0x00e24fc5 in GC_collect_or_expand (needed_blocks=1, ignore_off_page=0) at ../Source/alloc.c:1032
#7 0x00e2519a in GC_allocobj (sz=6, kind=1) at ../Source/alloc.c:1087
#8 0x00e31e90 in GC_generic_malloc_inner (lb=20, k=1) at ../Source/malloc.c:138
#9 0x00e31fde in GC_generic_malloc (lb=20, k=1) at ../Source/malloc.c:194
#10 0x00e322b8 in GC_malloc (lb=20) at ../Source/malloc.c:319
#11 0x00df5ab5 in gc::operator new (size=20) at ../Include/gc_cpp.h:275
#12 0x00de7cb7 in __automata_combined_test2__::DFA::levenshtein_automata (this=0xb7b49080, term=0xb7cb5d20, k=1)
at ../Source/automata_combined_test2.cpp:199
#13 0x00e3a085 in cDedupe::AccurateNearCompare (this=0x8052cd8,
Str1_=0x81f1a1d "GEMMA OSTRANDER GEM 10
DICARLO", ' ' <repeats 13 times>, "01748SUE WOLFE SUE 268 POND", ' ' <repeats 16 times>,
"01748REGINA SHAKIN REGI16 JAMIE", ' ' <repeats 15 times>, "01748KATHLEEN MAZUR CATH10 JAMIE "
...,
Str2_=0x81f2917 "LINDA ROBISON LIN 53 CHESTNUT", ' ' <repeats 12 times>,
"01748MICHELLE LITAVIS MICH15 BLUEBERRY", ' ' <repeats 11 times>, "01748JOAN TITUS JO 6 SMITH",
' ' <repeats 15 times>, "01748MELINDA MCDOWELL MEL 24 SMITH "..., Size_=10,
Update:
I looked through the Boehm garbage collector source and header files and realized that the Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS
error can be addressed by adding -DLARGE_CONFIG to the CFLAGS section of the GNUmakefile.
I tested this change to the GNUmakefile and found that the Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS
error message no longer appears. However, I now get a new segmentation fault (core dump). Using gdb, I found that the segmentation fault occurs at line 20 of the following function (which I have annotated):
set<tuple2<__ss_int, __ss_int> *> *NFA::next_state(set<tuple2<__ss_int, __ss_int> *> *states, str *input) {
    tuple2<__ss_int, __ss_int> *state;
    set<tuple2<__ss_int, __ss_int> *>::for_in_loop __3;
    set<tuple2<__ss_int, __ss_int> *> *__0, *dest_states;
    dict<str *, set<tuple2<__ss_int, __ss_int> *> *> *state_transitions;
    __iter<tuple2<__ss_int, __ss_int> *> *__1;
    __ss_int __2;

    dest_states = (new set<tuple2<__ss_int, __ss_int> *>());
    FOR_IN_NEW(state,states,0,2,3)
        state_transitions = (this->transitions)->get(state, ((dict<str *, set<tuple2<__ss_int, __ss_int> *> *> *)((new dict<void *, void *>()))));
        dest_states->update(state_transitions->get(input, new set<tuple2<__ss_int, __ss_int> *>()));
        dest_states->update(state_transitions->get(NFA::ANY, new set<tuple2<__ss_int, __ss_int> *>()));
    END_FOR
    return (new set<tuple2<__ss_int, __ss_int> *>(this->_expand(dest_states),1)); // line 20
}
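For reference, the generated C++ above corresponds to roughly the following Python (a reconstructed sketch, not the exact original; the `transitions` argument and `expand` callback are assumptions standing in for the NFA's members). It makes the allocation pressure visible: every `.get()` with a default builds a fresh dict or set whether or not the key exists, and the generated C++ performs the matching `new` through the GC allocator on each call:

```python
ANY = object()  # stand-in for NFA.ANY

def next_state(transitions, expand, states, input_):
    # Python counterpart of the generated C++ above.
    dest_states = set()
    for state in states:
        # Each .get() with a default allocates a new {} or set() per call,
        # even when the key is present and the default is discarded.
        state_transitions = transitions.get(state, {})
        dest_states.update(state_transitions.get(input_, set()))
        dest_states.update(state_transitions.get(ANY, set()))
    return frozenset(expand(dest_states))
```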
I would like to know whether this function can be modified to fix the segmentation fault. Thanks.
I finally figured out how to resolve the GC out-of-memory segmentation fault. I replaced the setdefault and get constructs in the Python program. For example, I replaced the Python statement self.transitions.setdefault(src, {}).setdefault(input, set()).add(dest) with:
if src not in self.transitions:
    self.transitions[src] = {}
result = self.transitions[src]
if input not in result:
    result[input] = set()
result[input].add(dest)
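The two forms produce the same data structure, but the chained setdefault allocates a throwaway {} and set() on every call even when the keys already exist, while the explicit form only allocates on a miss. A minimal standalone sketch of the difference, using a plain module-level transitions dict in place of self.transitions:

```python
transitions = {}

def add_transition_setdefault(src, input_, dest):
    # Original one-liner: each chained setdefault constructs a fresh default
    # ({} and set()) before checking whether the key is present.
    transitions.setdefault(src, {}).setdefault(input_, set()).add(dest)

def add_transition_explicit(src, input_, dest):
    # Rewritten form from the answer: allocate only when the key is missing.
    if src not in transitions:
        transitions[src] = {}
    result = transitions[src]
    if input_ not in result:
        result[input_] = set()
    result[input_].add(dest)
```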
In addition, I replaced the Python statement:
new_states = self.transitions.get(state, {}).get(NFA.EPSILON, set()).difference(states)
with:
if state not in self.transitions:
    self.transitions[state] = {}
result = self.transitions[state]
if NFA.EPSILON not in result:
    result[NFA.EPSILON] = set()
cook = result[NFA.EPSILON]
new_states = cook.difference(states)
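Again the rewrite trades the per-call temporaries of the chained .get() for explicit membership tests. One behavioral difference worth noting: the explicit form inserts empty entries for previously unseen states as a side effect, which costs a little dictionary growth in exchange for never rebuilding the defaults. A standalone sketch (EPSILON here is a stand-in for NFA.EPSILON):

```python
transitions = {}
EPSILON = object()  # stand-in for NFA.EPSILON

def new_states_chained(state, states):
    # Original chained .get(): builds a throwaway {} and set() on every call.
    return transitions.get(state, {}).get(EPSILON, set()).difference(states)

def new_states_explicit(state, states):
    # Rewrite from the answer: allocate only on a miss, then reuse the entry.
    if state not in transitions:
        transitions[state] = {}
    result = transitions[state]
    if EPSILON not in result:
        result[EPSILON] = set()
    cook = result[EPSILON]
    return cook.difference(states)
```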
Finally, I made sure that __shedskin__.init()
is placed outside any for or while loop, because __shedskin__.init()
calls the GC allocator. The purpose of all these changes is to reduce pressure on the GC allocator.
I have exercised these changes with 3 million calls to the GC allocator and have not yet hit a segmentation fault. Thanks.