Let's say we have a very large table, and we run queries of the form (this is only an example):
SELECT personID FROM people WHERE birthYear>2010 LIMIT 50
I want to maximize the performance of that query. The problem is that the database will scan the whole table to find the tuples that match the condition and then return the first 50, which is a serious issue for a table with millions or billions of tuples.
Is there a way, in Java (JDBC) or in SQL, to avoid scanning the whole table? For example, scan it progressively and stop at the first 50 rows that match the condition, or scan only the first 1000 rows, return all matches among them, and keep fetching more results when the user clicks a "Show More" button?
Thank you for your time.
The problem is not as bad as you fear. Here's an analysis of what might happen:
SELECT personID FROM people WHERE birthYear>1900 LIMIT 50
SELECT personID FROM people WHERE birthYear>2010 LIMIT 50
Case 1: No index on birthYear:
The query scans the table, stopping as soon as it has found 50 rows that satisfy the WHERE clause. With a condition like >1900, those 50 are likely to be among the very first rows it reads.
Case 2: An index starting with birthYear:
It will jump into the middle of the index to the first value with >1900 (or >2010), then grab the next 50 rows (or fewer). For each of those rows, it will reach into the table to fetch personID.
Case 3: INDEX(birthYear, personID):
As with case 2, but it does not need to "reach into the table", because personID is part of the index (a "covering" index).
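For reference, the indexes for cases 2 and 3 could be created like this (a sketch; the index names are arbitrary, and the syntax shown is MySQL/PostgreSQL style):

```sql
-- Case 2: a plain index on the filter column
CREATE INDEX idx_birthyear ON people (birthYear);

-- Case 3: a composite index that also covers the SELECT list,
-- so the query can be answered from the index alone
CREATE INDEX idx_birthyear_personid ON people (birthYear, personID);
```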
Only in case 1, and only if fewer than 50 rows satisfy >1900 (which seems unlikely), will it scan the entire table. Cases 2 and 3 stop promptly after 50 rows.
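For the "Show More" button, one approach (a sketch, not part of the original answer) is keyset pagination: the application remembers the last row returned and continues strictly after it, so the index in cases 2 and 3 is used for every page. The placeholders :lastYear and :lastID are illustrative; the row-value comparison shown works in MySQL and PostgreSQL:

```sql
-- First page
SELECT personID, birthYear
  FROM people
 WHERE birthYear > 2010
 ORDER BY birthYear, personID
 LIMIT 50;

-- "Show More": continue strictly after the last row of the previous page
SELECT personID, birthYear
  FROM people
 WHERE birthYear > 2010
   AND (birthYear, personID) > (:lastYear, :lastID)
 ORDER BY birthYear, personID
 LIMIT 50;
```

Unlike an OFFSET, this never re-reads the rows already shown, so each click costs roughly the same no matter how far the user has paged.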