pycorenlp: "CoreNLP request timed out. Your document may be too long"
I'm trying to run pycorenlp on a long text and get a "CoreNLP request timed out. Your document may be too long" error message. How do I fix it? Is there any way to increase Stanford CoreNLP's timeout?
I don't want to segment the text into smaller texts.
Here is the code I use:
'''
From https://github.com/smilli/py-corenlp/blob/master/example.py
'''
from pycorenlp import StanfordCoreNLP
import pprint

if __name__ == '__main__':
    nlp = StanfordCoreNLP('http://localhost:9000')
    fp = open("long_text.txt")
    text = fp.read()

    output = nlp.annotate(text, properties={
        'annotators': 'tokenize,ssplit,pos,depparse,parse',
        'outputFormat': 'json'
    })
    pp = pprint.PrettyPrinter(indent=4)
    pp.pprint(output)
The Stanford Core NLP Server was launched using:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer 9000
You can add 'timeout': '50000' (the unit is ms) in the properties dictionary:
output = nlp.annotate(text, properties={
    'timeout': '50000',
    'annotators': 'tokenize,ssplit,pos,depparse,parse',
    'outputFormat': 'json'
})
Otherwise, you can launch the Stanford Core NLP Server specifying the timeout:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 50000
(The documentation doesn't mention the timeout parameter; maybe they forgot to add it. It is at least present in stanford-corenlp-full-2015-12-09, aka 3.6.0, which is the latest public release.)
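Even with a larger timeout, a very long document can still fail. In that case the server returns the error message as plain text, so nlp.annotate hands back a str instead of a parsed dict when 'outputFormat' is 'json'. A minimal, hedged sketch of a client-side check (the sample values below are stand-ins for real annotate output, not produced by a live server):

```python
def is_timeout_error(output):
    """Heuristic: a successful 'json' request is parsed into a dict,
    while a timed-out request leaves the server's plain-text error
    message in place as a str."""
    return isinstance(output, str) and 'timed out' in output

# Stand-in values illustrating the two shapes annotate can return:
ok = {'sentences': []}
failed = 'CoreNLP request timed out. Your document may be too long'

print(is_timeout_error(ok))      # False
print(is_timeout_error(failed))  # True
```

If the check fires, you can retry with a bigger 'timeout' value, or give the server more memory via -mx.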