How to increase the speed of this NER model, implemented from scratch using 1 million labeled sentences
I want to train an NER model from scratch with spaCy, using one million sentences. The model has only two entity types. Below is the code I am using. Since I cannot share the data, I have created a dummy dataset.
My main problem is that training this model takes far too long. I would appreciate it if you could point out any mistakes in my code or suggest other approaches to speed up training.
TRAIN_DATA = [ ('Ich bin in Bremen', {'entities': [(11, 17, 'loc')]})] * 1000000
import spacy
import random
from spacy.util import minibatch, compounding

def train_spacy(data, iterations):
    TRAIN_DATA = data
    nlp = spacy.blank('de')
    if 'ner' not in nlp.pipe_names:
        ner = nlp.create_pipe('ner')
        nlp.add_pipe(ner, last=True)

    # add labels
    for _, annotations in TRAIN_DATA:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])

    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
    with nlp.disable_pipes(*other_pipes):
        optimizer = nlp.begin_training()
        for itn in range(iterations):
            print("Starting iteration " + str(itn))
            random.shuffle(TRAIN_DATA)
            losses = {}
            batches = minibatch(TRAIN_DATA, size=compounding(100, 64.0, 1.001))
            for batch in batches:
                texts, annotations = zip(*batch)
                nlp.update(texts, annotations, sgd=optimizer, drop=0.35, losses=losses)
            print("Losses", losses)
    return nlp

model = train_spacy(TRAIN_DATA, 20)
Perhaps you could try the following:
batches = minibatch(TRAIN_DATA, size=compounding(1, 512, 1.001))
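For context, here is a minimal sketch of how that batch schedule could slot into the training loop from the question. It assumes the spaCy v2.x API already used above (nlp.create_pipe, nlp.begin_training, nlp.update(texts, annotations, ...)); the call to spacy.prefer_gpu() and the name train_spacy_fast are my own additions for illustration, not part of the original code.

import random
import spacy
from spacy.util import minibatch, compounding

def train_spacy_fast(data, iterations):
    # Assumption: running on a GPU, when one is available, is usually the
    # single largest speed-up for spaCy training.
    spacy.prefer_gpu()

    nlp = spacy.blank('de')
    ner = nlp.create_pipe('ner')
    nlp.add_pipe(ner, last=True)
    for _, annotations in data:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])

    optimizer = nlp.begin_training()
    for itn in range(iterations):
        random.shuffle(data)
        losses = {}
        # Batch size grows from 1 towards 512, so each nlp.update call
        # processes progressively more examples per pass over the data.
        for batch in minibatch(data, size=compounding(1.0, 512.0, 1.001)):
            texts, annotations = zip(*batch)
            nlp.update(texts, annotations, sgd=optimizer, drop=0.35, losses=losses)
        print("Losses", losses)
    return nlp

Starting with a small batch size and letting it compound toward a larger cap (here 512) follows the pattern used in spaCy's own training examples, whereas the original compounding(100, 64.0, 1.001) has its start value larger than its stop value.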