
IBM Watson Natural Language Understanding uploading multiple documents for analysis

I have roughly 200 documents that need IBM Watson NLU analysis. Currently, they are processed one at a time. Can NLU perform a batch analysis? What is the correct Python code or process to batch-load the files and then retrieve the results? The end goal is to use the results to determine which documents are similar in nature. Any direction is greatly appreciated, as the IBM support documentation does not cover batch processing.

NLU can be "manually" adapted to do batch analysis, but the Watson service that provides what you are asking for is Watson Discovery. It lets you create Collections (sets of documents) that are enriched through an internal NLU function and can then be queried.
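To illustrate the "manual" batch approach, here is a minimal sketch, assuming the documents are local `.txt` files. The helper names (`analyze_batch`, `most_similar_pairs`), the directory layout, and the keyword-overlap similarity measure are my assumptions for illustration, not an official NLU batch API; `analyze_fn` stands in for whatever call you make per document.

```python
from itertools import combinations
from pathlib import Path


def analyze_batch(doc_dir, analyze_fn):
    """Run analyze_fn on every .txt file in doc_dir; return {filename: result}.

    analyze_fn is any callable taking the document text and returning an
    NLU-style result dict -- in practice, a wrapper around a per-document
    NLU analyze call.
    """
    results = {}
    for path in sorted(Path(doc_dir).glob("*.txt")):
        results[path.name] = analyze_fn(path.read_text(encoding="utf-8"))
    return results


def keyword_sets(results):
    """Extract the set of keyword strings from each NLU-style result."""
    return {name: {kw["text"] for kw in r.get("keywords", [])}
            for name, r in results.items()}


def most_similar_pairs(results):
    """Rank document pairs by Jaccard similarity of their keyword sets,
    highest first -- a simple way to see which documents are alike."""
    sets = keyword_sets(results)
    ranked = []
    for a, b in combinations(sorted(sets), 2):
        union = sets[a] | sets[b]
        score = len(sets[a] & sets[b]) / len(union) if union else 0.0
        ranked.append((score, a, b))
    return sorted(ranked, reverse=True)
```

With the `ibm-watson` Python SDK, `analyze_fn` would wrap the service client, roughly `lambda text: nlu.analyze(text=text, features=Features(keywords=KeywordsOptions(limit=10))).get_result()` after constructing `NaturalLanguageUnderstandingV1` with your credentials. Note that this still makes one API call per document (so rate limits and per-call billing apply), which is why Watson Discovery's Collections are the better fit for a 200-document corpus.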
