Memory Error when training a TBL POS Tagger in Python
When I train on a corpus of 40K sentences, there is no problem. But when I train on 86K sentences, I get the following error:
ERROR:root:
Traceback (most recent call last):
File "CLC_POS_train.py", line 95, in main
train(sys.argv[10], encoding, flag_tagger, k, percent, eval_flag)
File "CLC_POS_train.py", line 49, in train
CLC_POS.process('TBL', train_data, test_data, flag_evaluate[1], flag_dump[1], 'pos_tbl.model' + postfix)
File "d:\WORKing\VCL\TEST\CongToan_POS\Source\CLC_POS.py", line 184, in process
tagger = CLC_POS.train_tbl(train_data)
File "d:\WORKing\VCL\TEST\CongToan_POS\Source\CLC_POS.py", line 71, in train_tbl
tbl_tagger = brill_trainer.BrillTaggerTrainer.train(trainer, train_data, max_rules=1000, min_score=3)
File "C:\Python34\lib\site-packages\nltk-3.1-py3.4.egg\nltk\tag\brill_trainer.py", line 274, in train
self._init_mappings(test_sents, train_sents)
File "C:\Python34\lib\site-packages\nltk-3.1-py3.4.egg\nltk\tag\brill_trainer.py", line 341, in _init_mappings
self._tag_positions[tag].append((sentnum, wordnum))
MemoryError
INFO:root:
I have already switched to Python 3.5 on 64-bit Windows, but I still get this error. Here is the code used for training:
t0 = RegexpTagger(MyRegexp.create_regexp_tagger())
t1 = nltk.UnigramTagger(train_data, backoff=t0)
t2 = nltk.BigramTagger(train_data, backoff=t1)
trainer = brill_trainer.BrillTaggerTrainer(t2, brill.fntbl37())
tbl_tagger = trainer.train(train_data, max_rules=1000, min_score=3)
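One thing worth checking first: the traceback shows the interpreter under `C:\Python34`, which is often a 32-bit installation. On Windows, a 32-bit CPython process is limited to roughly 2 GB of address space no matter how much physical RAM the machine has, so a `MemoryError` can occur even on a 64-bit OS with plenty of memory. A quick stdlib check of the running interpreter's bitness:

```python
import sys

# sys.maxsize is 2**63 - 1 on a 64-bit build of CPython,
# but only 2**31 - 1 on a 32-bit build.
bits = 64 if sys.maxsize > 2**32 else 32
print("This interpreter is %d-bit" % bits)
```

If this prints 32, installing a 64-bit Python build may resolve the error without any code changes.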
This happens because your PC does not have enough RAM: training on a large corpus requires a lot of memory. Install more RAM and the training will complete.
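If adding RAM is not an option, the trainer's footprint can also be reduced on the software side. The error is raised in `_init_mappings`, which builds per-tag position lists over the whole corpus, and the cost grows with the corpus size, the template set, and `max_rules`. A minimal sketch (using a toy tagged corpus in place of the real `train_data`, and NLTK's smaller `brill24()` template set instead of `fntbl37()`):

```python
import nltk
from nltk.tag import brill, brill_trainer

# Toy tagged corpus standing in for the real train_data.
train_data = [
    [('the', 'DT'), ('dog', 'NN'), ('barks', 'VBZ')],
    [('the', 'DT'), ('cat', 'NN'), ('sleeps', 'VBZ')],
]

# Simple backoff chain, as in the question.
t0 = nltk.DefaultTagger('NN')
t1 = nltk.UnigramTagger(train_data, backoff=t0)

# brill24() is a smaller template set than fntbl37(), so the trainer
# generates and scores far fewer candidate rules per position.
trainer = brill_trainer.BrillTaggerTrainer(t1, brill.brill24())
tbl_tagger = trainer.train(train_data, max_rules=100, min_score=3)

print(tbl_tagger.tag(['the', 'dog', 'sleeps']))
```

Training on a random subsample of the 86K sentences is another way to trade a little accuracy for a much smaller memory footprint.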