IBM Watson Natural Language Understanding uploading multiple documents for analysis
I have roughly 200 documents that need to have IBM Watson NLU analysis done. Currently, processing is performed one document at a time. Will NLU be able to perform a batch analysis? What is the correct Python code or process to batch-load the files and then retrieve the results? The end goal is to grab the results to analyze which documents are similar in nature. Any direction is greatly appreciated, as the IBM support documentation does not cover batch processing.
NLU can be "manually" adapted to do batch analysis. But the Watson service that provides what you are asking for is Watson Discovery.
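A minimal sketch of that "manual" batching, separating the generic loop from the service call so it is easy to test. The directory layout, the `.txt` filter, and the credential placeholders (`YOUR_APIKEY`, `YOUR_SERVICE_URL`) are assumptions for illustration; adapt the `Features` to whatever enrichments you need:

```python
import os

def batch_analyze(doc_dir, analyze_fn):
    """Run analyze_fn on every .txt file in doc_dir; collect results by filename."""
    results = {}
    for name in sorted(os.listdir(doc_dir)):
        if not name.endswith(".txt"):
            continue
        with open(os.path.join(doc_dir, name), encoding="utf-8") as f:
            text = f.read()
        try:
            results[name] = analyze_fn(text)
        except Exception as exc:
            # Record the failure and keep going, so one bad document
            # does not abort the other ~199.
            results[name] = {"error": str(exc)}
    return results

# With the real service (ibm-watson Python SDK), analyze_fn would look
# roughly like this (sketch, not verified against your account setup):
#
#   from ibm_watson import NaturalLanguageUnderstandingV1
#   from ibm_watson.natural_language_understanding_v1 import Features, KeywordsOptions
#   from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
#
#   nlu = NaturalLanguageUnderstandingV1(
#       version="2022-04-07",
#       authenticator=IAMAuthenticator("YOUR_APIKEY"))  # hypothetical credential
#   nlu.set_service_url("YOUR_SERVICE_URL")             # hypothetical URL
#
#   def analyze_fn(text):
#       return nlu.analyze(
#           text=text,
#           features=Features(keywords=KeywordsOptions(limit=10))).get_result()
```

Keeping `analyze_fn` pluggable also lets you add retry or rate-limit handling in one place without touching the loop.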
It lets you create Collections (sets of documents) that are enriched through an internal NLU function and can then be queried.
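For the stated end goal (finding which documents are similar), one simple approach once you have per-document keyword lists back from the batch run is pairwise Jaccard similarity over the keyword sets. This is a rough sketch with made-up data, not the only (or best) similarity measure:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two keyword collections: |A∩B| / |A∪B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def most_similar_pairs(doc_keywords, threshold=0.3):
    """Given {doc_name: [keywords]}, return (doc1, doc2, score) pairs
    at or above threshold, most similar first."""
    pairs = []
    for (d1, k1), (d2, k2) in combinations(sorted(doc_keywords.items()), 2):
        score = jaccard(k1, k2)
        if score >= threshold:
            pairs.append((d1, d2, score))
    return sorted(pairs, key=lambda p: -p[2])
```

For more nuance you could weight keywords by the relevance scores NLU returns, or compare concept/category enrichments instead of raw keywords.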