Does SQLite feature consolidation functions for multiple entries?

I am planning to use an SQLite database on an embedded Linux computer (Raspberry Pi or similar specs) to store sensor data of about 16 floats for a period of one to two years.

The data will be accessed through a web interface, which is served from the embedded board as well. The purpose is to visualize the data with graphs, etc.

Let's say the user wants to view the data for a whole year in a graph. In order not to flood the client browser with millions of data points, it makes sense to consolidate the data before it goes to the browser. For example, one year would be described by average values for each week of the year.

Does SQLite offer such data aggregation commands, e.g. averaging or summing huge numbers of entries in a single table?

Are these operations performant on an embedded computer whose specs are similar to those of the famous Raspberry Pi?

Do these operations lock up the database, so that new entries have to wait before they can be written?

The simple answer is 'Yes':

https://www.sqlite.org/lang_aggfunc.html
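For instance, the weekly averaging described in the question can be a single query. This is just a sketch, assuming a hypothetical table readings(ts, v1, ...) where ts is an ISO-8601 timestamp; SQLite's strftime('%Y-%W', ...) buckets rows by week of the year:

    -- Roughly 53 rows per year instead of millions:
    SELECT strftime('%Y-%W', ts) AS week,   -- 'YYYY-WW' bucket
           AVG(v1) AS avg_v1,               -- weekly average
           SUM(v1) AS sum_v1                -- weekly sum
    FROM   readings
    WHERE  ts >= '2023-01-01' AND ts < '2024-01-01'
    GROUP  BY week
    ORDER  BY week;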

But you may want to consider that there are many factors that contribute to the speed of a query, not least of which are the schema/data model design and the indexes on the tables used.

See https://www.sqlite.org/queryplanner.html for a discussion of how queries are executed.
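As a concrete (and again hypothetical) illustration, an index on the timestamp column keeps the year-range query above from scanning the whole table, and EXPLAIN QUERY PLAN shows whether it is actually used:

    CREATE INDEX IF NOT EXISTS idx_readings_ts ON readings (ts);

    -- Should report a search using idx_readings_ts rather than a full scan:
    EXPLAIN QUERY PLAN
    SELECT AVG(v1) FROM readings
    WHERE  ts >= '2023-01-01' AND ts < '2024-01-01';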

You have 3 options for this:

1) Pre-calculate the data when it is generated: whenever you capture new sensor data, update your aggregates (a sketch follows after this list). The downside is limited flexibility for the user to change parameters; they get a fixed list of aggregates and time periods, and that's it.

2) Send the data to a central, more powerful server and have the client log in and use the horsepower of the central server to do the aggregates. The downside is that the sensor collectors need to be connected to the central server, and there are scaling issues since all data for all clients is calculated centrally: more clients, more horsepower needed. There are many server-side scaling paradigms, so this is more a cost constraint than a technical one.

3) Send raw data to the client and let the client machine handle aggregation. The downside is data transmission if you are talking about millions of records. However, with client-side database engines, such as Google's Lovefield, this is in my opinion the most future-proof architecture option, as it gives significant power to the user via client-side libraries and makes use of the client machine's resources. You could also look at a mixed model in which some data is pre-aggregated on the server before being sent to the client to reduce the data size (see the second sketch below).
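A minimal sketch of option 1, reusing the hypothetical readings table from above and requiring SQLite 3.24+ for the UPSERT syntax: the logger maintains a running sum and count per week, so the weekly average never has to touch the raw rows:

    CREATE TABLE IF NOT EXISTS weekly_stats (
        week  TEXT PRIMARY KEY,   -- 'YYYY-WW' bucket
        total REAL NOT NULL,      -- running sum of v1
        n     INTEGER NOT NULL    -- number of samples in the bucket
    );

    -- Run once per new sample; :ts and :v1 are the incoming reading.
    INSERT INTO weekly_stats (week, total, n)
    VALUES (strftime('%Y-%W', :ts), :v1, 1)
    ON CONFLICT (week) DO UPDATE
        SET total = total + excluded.total,
            n     = n + 1;

    -- The graph query then reads ~53 precomputed rows per year:
    SELECT week, total / n AS avg_v1 FROM weekly_stats ORDER BY week;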
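And a sketch of the mixed model mentioned in option 3 (same hypothetical schema): the server reduces the raw rows to daily averages, and the client-side engine aggregates those further into weeks, months, or arbitrary ranges. Shipping the per-day counts alongside the averages lets the client compute correctly weighted averages:

    -- Server side: at most ~365 rows per year per channel.
    SELECT strftime('%Y-%m-%d', ts) AS day,
           AVG(v1)  AS avg_v1,
           COUNT(*) AS n     -- needed client-side for weighted averages
    FROM   readings
    GROUP  BY day
    ORDER  BY day;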
