
Passage limit for Reading comprehension by using Transformer QA pretrained model in allennlp

What is the max passage limit or hardware limit when using the transformer-qa model for reading comprehension in AllenNLP?

Predictor.from_path('https://storage.googleapis.com/allennlp-public-models/transformer-qa-2020-10-03.tar.gz').predict(passage=passage, question=question)

I'm getting the error "DefaultCPUAllocator: not enough memory: you tried to allocate 23437770752 bytes. Buy new RAM!"

I don't think that error message comes from AllenNLP. What are you running when you get it?

That number represents 22GB, which is too much for the TransformerQA model, unless you are sending a really large sequence. Generally, TransformerQA can only do 512 tokens at a time. If your text has more than 512 tokens, it will break it up into multiple sequences of length 512 each. The only limit to how many of these 512-length sequences it creates is the size of your memory and your patience.
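If memory is the constraint, one workaround is to pre-chunk the passage yourself and call the predictor once per chunk, so each call stays near the model's 512-token window. The sketch below splits on whitespace with a chunk size of 400 words and an overlap of 50; those numbers are rough assumptions (wordpiece tokenization yields more tokens than words, so the chunk size should stay well under 512), not values prescribed by AllenNLP.

```python
def chunk_passage(passage, max_words=400, stride=50):
    """Split a passage into overlapping word windows.

    The overlap (`stride` words) reduces the chance that an answer
    spanning a chunk boundary is lost. Word counts only approximate
    the model's wordpiece token counts.
    """
    words = passage.split()
    chunks = []
    start = 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
        start += max_words - stride  # step forward, keeping an overlap
    return chunks
```

Each chunk can then be fed to the predictor separately and the answers compared, e.g. by keeping the prediction with the highest span score (the exact output key depends on the predictor's return format):

```python
# Hypothetical usage with the predictor from the question:
# outputs = [predictor.predict(passage=c, question=question)
#            for c in chunk_passage(passage)]
```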
