How do I efficiently store and query a raw JSON stream in MongoDB?

I would like to store a raw JSON stream (either from Twitter or the NYTimes) efficiently in MongoDB, so that I can later index the data (NYTimes articles, or tweets/usernames) with either Lucene or Hadoop. What's the smartest way to store the data in Mongo? Should I just pipe the JSON straight in, or is there something better? I am running MongoDB on a single machine, as a three-member replica set.

Is there an efficient (smart) way to write the queries, or to structure the stored data, so that search queries perform better?

This depends entirely on the kinds of queries you need to make and on your application's usage pattern. It would be straightforward to store each tweet as a MongoDB document containing fields such as sender, timestamp, and text. Depending on which queries you need to run, you should create indexes on those fields (more info: http://www.mongodb.org/display/DOCS/Indexes).
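As a sketch of that per-tweet schema (the field names and the pymongo calls are assumptions for illustration, not from the original post):

```python
from datetime import datetime, timezone

def make_tweet_doc(sender, text, created_at):
    """One MongoDB document per tweet; pymongo stores a dict like this as BSON."""
    return {
        "sender": sender,          # e.g. the Twitter username
        "text": text,              # raw tweet text
        "created_at": created_at,  # store timestamps as datetimes, not strings
    }

doc = make_tweet_doc("nytimes", "Breaking: ...",
                     datetime(2020, 1, 1, tzinfo=timezone.utc))

# With a running mongod and pymongo installed, inserting the document and
# indexing the fields you query on would look roughly like:
#   from pymongo import MongoClient, ASCENDING, DESCENDING
#   tweets = MongoClient()["stream"]["tweets"]
#   tweets.insert_one(doc)
#   tweets.create_index([("sender", ASCENDING), ("created_at", DESCENDING)])
```

The compound index sketched above serves queries that filter by sender and sort by time, which is a common access pattern for a tweet stream.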

For full-text search, you could tokenize/parse/stem the text of each tweet and store an array of tokens alongside it, which you can then index to make queries on it fast. If you need more powerful full-text search features, you could instead index the tweets with Lucene and store each tweet's MongoDB ObjectId in the corresponding Lucene document — but this introduces the complexity of essentially maintaining two data stores.
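A minimal sketch of the token-array approach (the tokenizer and the tiny stopword list here are illustrative assumptions; a real pipeline would also apply stemming):

```python
import re

STOPWORDS = {"a", "an", "and", "is", "the", "to"}  # tiny illustrative list

def tokenize(text):
    """Lowercase, split on non-alphanumeric characters, drop stopwords."""
    return [t for t in re.findall(r"[a-z0-9]+", text.lower())
            if t not in STOPWORDS]

tweet = {"sender": "nytimes", "text": "MongoDB is fast and flexible"}
tweet["tokens"] = tokenize(tweet["text"])
# tweet["tokens"] == ["mongodb", "fast", "flexible"]

# After creating an index on the array field (MongoDB builds a multikey
# index over array values):
#   tweets.create_index("tokens")
# a query like {"tokens": "mongodb"} matches any tweet containing the word.
```

Because MongoDB indexes each element of an array field separately, an equality match on `tokens` behaves like a simple inverted-index lookup.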

Again, there's really no right answer here without knowing the details of the use case.
