
Text mining using Solr and Hadoop

I have a Solr database containing about 100m documents. I would like to text-mine these documents.

I'm thinking of writing text-mining modules in Java, and then running the JARs on a Hadoop cluster. (The output of the modules can be stored back in Solr.)
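For illustration, a minimal sketch of what one such module could look like as a plain Hadoop MapReduce job: it counts term frequencies over documents exported from Solr as text files (one document per line). The class name and input/output paths are hypothetical.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Hypothetical text-mining module: term frequency over exported Solr documents.
    public class TermCount {
        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text term = new Text();
            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                // Emit (term, 1) for every whitespace-separated token in the document.
                StringTokenizer tok = new StringTokenizer(value.toString().toLowerCase());
                while (tok.hasMoreTokens()) {
                    term.set(tok.nextToken());
                    ctx.write(term, ONE);
                }
            }
        }
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
                    throws IOException, InterruptedException {
                // Sum the partial counts for each term across all documents.
                int sum = 0;
                for (IntWritable v : vals) sum += v.get();
                ctx.write(key, new IntWritable(sum));
            }
        }
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "term-count");
            job.setJarByClass(TermCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class); // local pre-aggregation on each mapper
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // exported documents
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // results directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The packaged JAR would then be submitted with hadoop jar termcount.jar TermCount <in> <out>, and the output could be re-indexed into Solr.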

I'm new to both Hadoop and Solr, so I'd like to know: is this possible? And is there a better way to text-mine the documents?

Any ideas regarding this situation would help me a lot.

Do you need to access the documents frequently?

You can use SolrCloud if you need to serve a large document collection. Its sharding and replication structures can handle a high query load.

And JSON/XML documents are easily stored in Solr; see the sketch below.
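As a rough sketch of writing mined results back into a SolrCloud collection, assuming SolrJ 7.x's CloudSolrClient.Builder; the ZooKeeper host zk1:2181, the collection name documents, and the field names are all hypothetical:

    import java.util.Collections;
    import java.util.Optional;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class StoreResult {
        public static void main(String[] args) throws Exception {
            // Connect via ZooKeeper so requests are routed to the right shard/replica.
            CloudSolrClient client = new CloudSolrClient.Builder(
                    Collections.singletonList("zk1:2181"), Optional.empty()).build();
            client.setDefaultCollection("documents");

            // Store one mined result as a document.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            doc.addField("topic_s", "text-mining"); // dynamic string field
            client.add(doc);
            client.commit();
            client.close();
        }
    }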

Check the Mahout library before you go with completely custom code; it has a Lucene driver, and it is integrated with Hadoop for most purposes. Mostly, you need term vectors in order to do mining with Mahout. Once you have those, it's a rather seamless setup.
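For example, once term vectors have been extracted from the Lucene/Solr index, clustering the documents on Hadoop could look roughly like this. This is a minimal sketch assuming Mahout 0.9's RandomSeedGenerator and KMeansDriver APIs; the HDFS paths, the cluster count of 20, and the iteration settings are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.mahout.clustering.kmeans.KMeansDriver;
    import org.apache.mahout.clustering.kmeans.RandomSeedGenerator;
    import org.apache.mahout.common.distance.CosineDistanceMeasure;

    public class ClusterDocs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path vectors = new Path("solr-term-vectors"); // pre-built term vectors (hypothetical path)
            Path seeds = new Path("kmeans-seeds");
            Path output = new Path("kmeans-output");

            // Pick 20 random vectors as initial cluster centers, using cosine distance.
            RandomSeedGenerator.buildRandom(conf, vectors, seeds, 20, new CosineDistanceMeasure());

            // Run distributed k-means: convergence delta 0.01, at most 10 iterations,
            // then classify the input vectors into the final clusters.
            KMeansDriver.run(conf, vectors, seeds, output, 0.01, 10, true, 0.0, false);
        }
    }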
