
MemoryError when training a TBL POS tagger in Python

When I train on a corpus of 40K sentences, there is no problem, but when I train on 86K sentences, I get an error like this:

ERROR:root:
Traceback (most recent call last):
  File "CLC_POS_train.py", line 95, in main
    train(sys.argv[10], encoding, flag_tagger, k, percent, eval_flag)
  File "CLC_POS_train.py", line 49, in train
    CLC_POS.process('TBL', train_data, test_data, flag_evaluate[1], flag_dump[1], 'pos_tbl.model' + postfix)
  File "d:\WORKing\VCL\TEST\CongToan_POS\Source\CLC_POS.py", line 184, in process
    tagger = CLC_POS.train_tbl(train_data)
  File "d:\WORKing\VCL\TEST\CongToan_POS\Source\CLC_POS.py", line 71, in train_tbl
    tbl_tagger = brill_trainer.BrillTaggerTrainer.train(trainer, train_data, max_rules=1000, min_score=3)
  File "C:\Python34\lib\site-packages\nltk-3.1-py3.4.egg\nltk\tag\brill_trainer.py", line 274, in train
    self._init_mappings(test_sents, train_sents)
  File "C:\Python34\lib\site-packages\nltk-3.1-py3.4.egg\nltk\tag\brill_trainer.py", line 341, in _init_mappings
    self._tag_positions[tag].append((sentnum, wordnum))
MemoryError
INFO:root:

I have already switched to Python 3.5 on 64-bit Windows, but I still get this error. This is the code used for training:

import nltk
from nltk.tag import RegexpTagger, brill, brill_trainer

t0 = RegexpTagger(MyRegexp.create_regexp_tagger())  # regex-based baseline tagger
t1 = nltk.UnigramTagger(train_data, backoff=t0)     # unigram tagger, falls back to t0
t2 = nltk.BigramTagger(train_data, backoff=t1)      # bigram tagger, falls back to t1
trainer = brill_trainer.BrillTaggerTrainer(t2, brill.fntbl37())
tbl_tagger = brill_trainer.BrillTaggerTrainer.train(trainer, train_data, max_rules=1000, min_score=3)

This happens because your PC doesn't have enough RAM. Training on a large corpus takes a lot of memory: the traceback shows the MemoryError is raised while the trainer builds its tag-position index in _init_mappings, before any rules are learned. Install more RAM and the training will complete.
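Before buying RAM, two cheaper checks may help. The following is a minimal sketch from the editor, not part of the original answer; it reuses train_data, t2, brill, and brill_trainer from the question's code, and the 40000 subset size is taken from the question's observation that 40K sentences train fine. A 32-bit Python build on Windows is limited to roughly 2 GB regardless of installed RAM, so first confirm the interpreter is a 64-bit build; if memory is still short, training the Brill stage on a random subset of the corpus keeps the tag-position index within bounds.

import random
import struct

# A 64-bit interpreter prints 64 here; a 32-bit build on Windows
# cannot address much more than about 2 GB regardless of installed RAM.
print(struct.calcsize("P") * 8)

# Train the Brill stage on a random subset known to fit in memory
# (40K sentences trained fine per the question). The backoff chain
# t0/t1/t2 can still be built from the full corpus.
random.seed(0)  # make the subset reproducible
subset = random.sample(train_data, 40000)
trainer = brill_trainer.BrillTaggerTrainer(t2, brill.fntbl37())
tbl_tagger = trainer.train(subset, max_rules=1000, min_score=3)

Training on a subset trades some rule quality for a bounded memory footprint, which may be acceptable given that the n-gram backoff taggers still see the full corpus.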
