I am trying to process and store location information for about 30k vehicles in DynamoDB.
I am following this strategy:
- A Hist table keeps each device's historical information.
- A Live table keeps the live (latest) info.
- Items will be: deviceid (partition key), lat, lon, timestamp, geohash (sort key).
- The geohash helps in searching for nearby vehicles.
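To make the sort-key idea concrete, here is a minimal geohash encoder in Python (a sketch, not a production library; the base32 alphabet and longitude/latitude bit interleaving follow the standard geohash scheme). The useful property is that nearby points share a common prefix, so a `begins_with` condition on the sort key narrows a query to a geographic cell:

```python
# Standard geohash base32 alphabet (no a, i, l, o).
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=9):
    """Encode a lat/lon pair as a geohash string of `precision` characters."""
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    bits = []
    even = True  # geohash interleaves bits, starting with longitude
    while len(bits) < precision * 5:
        rng = lon_range if even else lat_range
        val = lon if even else lat
        mid = (rng[0] + rng[1]) / 2
        if val > mid:
            bits.append(1)
            rng[0] = mid   # keep the upper half
        else:
            bits.append(0)
            rng[1] = mid   # keep the lower half
        even = not even
    # Pack each group of 5 bits into one base32 character.
    chars = []
    for i in range(0, len(bits), 5):
        n = 0
        for b in bits[i:i + 5]:
            n = n * 2 + b
        chars.append(BASE32[n])
    return "".join(chars)
```

Nearby vehicles then share a geohash prefix: `geohash_encode(57.64911, 10.40744, 11)` yields `"u4pruydqqvj"`, and any point a few hundred meters away will start with the same `"u4pru"` prefix.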
How can I improve the scaling of the Hist and Live tables for reads and writes, given that each vehicle writes every 10 seconds?
You can specify the read and write throughput of DynamoDB tables upon creation, and you can modify the throughput later if necessary. In addition, DynamoDB allows bursting beyond these provisioned limits.
To obtain the full throughput of your tables, use a wide range of partition key values so that requests are distributed across many partitions (and therefore many servers).
DynamoDB also supports Auto Scaling, so it can automatically adjust provisioned throughput based on actual usage.
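Auto Scaling for DynamoDB is configured through the Application Auto Scaling API. As a sketch, the snippet below builds the two parameter sets that API expects for write capacity (the table name "Live" and the capacity numbers are assumptions for your workload; with ~30k vehicles each writing every 10 s, average load is roughly 3,000 writes/sec):

```python
def scaling_params(table_name, min_wcu, max_wcu, target_pct=70.0):
    """Build the parameters for Application Auto Scaling's
    register_scalable_target and put_scaling_policy calls, targeting a
    DynamoDB table's write capacity (dict shapes per the AWS API)."""
    target = {
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table_name}",
        "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
        "MinCapacity": min_wcu,
        "MaxCapacity": max_wcu,
    }
    policy = {
        "PolicyName": f"{table_name}-write-scaling",  # illustrative name
        "PolicyType": "TargetTrackingScaling",
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table_name}",
        "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_pct,  # scale to keep ~70% utilization
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    }
    return target, policy

# Assumed capacity band around the ~3,000 writes/sec average:
target, policy = scaling_params("Live", min_wcu=1500, max_wcu=6000)
# To apply, pass these to boto3's "application-autoscaling" client:
#   client.register_scalable_target(**target)
#   client.put_scaling_policy(**policy)
```

A repeat of the same two calls with `ScalableDimension` set to `dynamodb:table:ReadCapacityUnits` (and the read-utilization metric) covers the read side.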
For improved scaling of eventually consistent reads, you can also use in-memory acceleration with DAX (DynamoDB Accelerator).
In situations of bursty writes (where there might be insufficient write throughput), some AWS users temporarily store data in an Amazon SQS queue after receiving a throttling error, with a backend process that later reads these messages and inserts them into DynamoDB. This allows tables to be provisioned for average throughput rather than peak throughput.
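The buffering pattern above can be sketched as follows. This is a minimal illustration with an in-memory deque standing in for SQS and a stub writer standing in for DynamoDB's `put_item`; in production the queue would be an SQS client (`send_message` / `receive_message`) and the writer a real table, so all names here are illustrative assumptions:

```python
import collections

class ThrottlingError(Exception):
    """Stand-in for DynamoDB's ProvisionedThroughputExceededException."""

def write_or_buffer(item, put_item, overflow_queue):
    """Try the DynamoDB write; on throttling, buffer the item instead."""
    try:
        put_item(item)
        return True
    except ThrottlingError:
        overflow_queue.append(item)  # would be SQS send_message in production
        return False

def drain(overflow_queue, put_item):
    """Backend process: retry buffered items when capacity is available."""
    while overflow_queue:
        item = overflow_queue.popleft()
        try:
            put_item(item)
        except ThrottlingError:
            overflow_queue.appendleft(item)  # still throttled; try again later
            break

# Demo: a writer that is throttled for its first two calls, then recovers.
calls = {"n": 0}
store = []
def flaky_put(item):
    calls["n"] += 1
    if calls["n"] <= 2:
        raise ThrottlingError
    store.append(item)

q = collections.deque()
for i in range(3):
    write_or_buffer({"deviceid": f"v{i}"}, flaky_put, q)
drain(q, flaky_put)
# All three items land in the store; none are lost to throttling.
```

The design choice is that the hot path never blocks on retries: a throttled write costs one queue append, and the drain process absorbs the backlog at whatever rate the table's provisioned (average) throughput allows.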
Bottom line: you should be able to avoid most scaling issues simply by increasing the throughput of your tables. The other techniques detailed above can provide even more scale.