We have a table in an application that is very large: easily millions of rows.
Its structure is basically as follows:
SERIES_ID | YEAR | DAY_SINCE_EPOCH | HOUR | MINUTE | VALUE
We have indexes on YEAR and DAY_SINCE_EPOCH.
The problem is that certain kinds of queries are very slow, such as:
SELECT
...
WHERE
SERIES_ID = 3 AND
DAY_SINCE_EPOCH < 16000 AND
YEAR = 2012
ORDER BY
DAY_SINCE_EPOCH DESC,
HOUR DESC,
MINUTE DESC
LIMIT 1
This takes about 10 seconds in a table with 2M rows, and well over 20 seconds in a table with 18M rows.
The intent is to find the last record of series 3 before day 16000. The YEAR = 2012 condition is only there to speed up the lookup.
So I was wondering: do we have the indexes set up right? Perhaps it would be faster without the year index? Or with an added SERIES_ID index, or something like that?
Or would just restructuring the query help?
Any ideas on how to speed up the search would be welcome!
This index might help:
> create index on your_table(series_id, day_since_epoch, year);
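A note on column order (a general guideline, not verified against the poster's data): SERIES_ID and YEAR are equality predicates, while DAY_SINCE_EPOCH is a range predicate that also drives the ORDER BY. Putting the equality columns first usually lets one index satisfy both the filter and the sort. A sketch, with hypothetical index and table names:

```sql
-- Equality columns first, range/sort columns last, so the engine can
-- descend straight to the newest matching row and stop after LIMIT 1.
CREATE INDEX idx_series_year_day
    ON your_table (series_id, year, day_since_epoch DESC, hour DESC, minute DESC);

-- Check that the plan actually uses the index before trusting it:
EXPLAIN
SELECT *
FROM your_table
WHERE series_id = 3
  AND year = 2012
  AND day_since_epoch < 16000
ORDER BY day_since_epoch DESC, hour DESC, minute DESC
LIMIT 1;
```

With such an index the database can walk the index backwards from (3, 2012, 16000) and return the first row it finds, instead of scanning millions of rows and sorting them.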
Check these conditions:
SERIES_ID = 3 AND
DAY_SINCE_EPOCH < 16000 AND
YEAR = 2012
When you are updating the table (e.g. saving new records), store the result of that check as true/false in a "bit" column, and then filter on it like:
WHERE checked = 1 ...
SERIES_ID | YEAR | DAY_SINCE_EPOCH | HOUR | MINUTE | VALUE | checked
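If I read this suggestion correctly, the idea is to precompute the filter at write time so the query only tests a single flag. A minimal sketch, assuming a MySQL-style schema (table and column names are made up, and the literal values are hard-coded):

```sql
-- Hypothetical: add a flag column that is maintained whenever rows are written.
ALTER TABLE your_table ADD COLUMN checked BIT NOT NULL DEFAULT 0;

-- Set the flag for rows matching the conditions of interest.
UPDATE your_table
SET checked = (series_id = 3 AND day_since_epoch < 16000 AND year = 2012);

-- The slow query then reduces to a lookup on the precomputed flag.
SELECT *
FROM your_table
WHERE checked = 1
ORDER BY day_since_epoch DESC, hour DESC, minute DESC
LIMIT 1;
```

One caveat: baking literal values like series 3 and day 16000 into a flag only pays off if the exact same filter is reused constantly; for ad-hoc values of SERIES_ID and DAY_SINCE_EPOCH, a composite index is the more general fix.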