
Efficient NoSQL data model for large collections

Scenario:

There are 1,000,000 coordinates (pixels) in an element.

A coordinate consists of x: number, y: number

Users select a single coordinate to 'activate'

User-facing objective:

Show a count of unique coordinates activated.

e.g. 247,456 out of 1,000,000 coordinates have been activated.

DB Objectives:

How should such a large data set be modeled in NoSQL?

Approach #1: Pre-populate a collection with all possible coordinates, then remove them and increment a counter as coordinates are activated.

Searching through the collection would be expensive at first, but would only improve as more coordinates are activated and removed.

Approach #2: Start with an empty collection that grows as coordinates are activated, inserting one document and incrementing a counter per activation.

Reading/writing would become increasingly expensive without an efficient architecture.
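Approach #2 can be sketched the same way (again with hypothetical names; a dict stands in for the growing collection, keyed on the coordinate so duplicate activations are detected without a scan):

```python
# Minimal in-memory sketch of Approach #2: the collection starts empty
# and grows one document per newly activated coordinate; a counter
# document tracks the unique total.

activated: dict[str, dict] = {}  # simulates the growing collection
stats = {"activated_count": 0}   # simulates the counter document

def activate(x: int, y: int) -> bool:
    """Insert-if-absent keyed on "x:y"; only first activations count."""
    key = f"{x}:{y}"
    if key in activated:
        return False  # duplicate activation: no write, no increment
    activated[key] = {"x": x, "y": y}
    stats["activated_count"] += 1
    return True

activate(3, 7)
activate(3, 7)  # duplicate: ignored
activate(5, 1)
print(stats["activated_count"])  # 2
```

In a real NoSQL store the same idea maps to using `"x:y"` as the document ID, so the insert-if-absent step is a single keyed upsert rather than a search, and the user-facing count is one read of the counter document.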

Use the second approach, and simply take the time to build a good architecture around it, because it will be very useful and will further reduce document read costs compared to removing coordinates as in the first approach.
