
Does using LIMIT in MySQL queries make my application faster and more efficient?

Assuming I have more than 1 million records in a table, which of the two code samples below will return results faster and more efficiently? I am using PHP.

Should I use this?

$query = mysql_query("select count(*) as count from reversals where seen = 'false'");
while ($q = mysql_fetch_array($query)) {
    $limit = $q['count'];
}

$query2 = mysql_query("select * from reversals where seen = 'false' limit $limit");
while ($q = mysql_fetch_array($query2)) {
    echo $q['amount'];
}

Or this?

$query = mysql_query("select * from reversals where seen = 'false'");
while ($q = mysql_fetch_array($query)) {
    echo $q['amount'];
}

Your first code example counts the number of rows and then selects all of them (assuming no concurrent sessions modify the table in between).

In practice this means you select the whole table anyway, so the LIMIT is pointless there (and does not affect performance either).
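To illustrate: because the LIMIT here equals the total row count, both versions return exactly the same rows. A LIMIT only pays off when it is smaller than the result set. A sketch, using the table and column names from the question:

```sql
-- The two-step version: count first, then limit to that very count...
SELECT COUNT(*) FROM reversals WHERE seen = 'false';   -- suppose this returns N
-- SELECT * FROM reversals WHERE seen = 'false' LIMIT N;  (N substituted in)
-- ...returns every matching row, i.e. it is equivalent to the plain query:
SELECT * FROM reversals WHERE seen = 'false';

-- LIMIT only helps when you actually want fewer rows than match:
SELECT * FROM reversals WHERE seen = 'false' LIMIT 100;
```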

Wherever you read it, it is a wrong assumption that adding LIMIT automagically makes your queries faster.

MySQL performance optimization IS a complicated topic, and most of the time there are no generic recommendations that work for everyone and in every case.

So if you have any real issues with MySQL performance, explain exactly what they are and provide the real database schema, some statistics about the data, and so on.

OK, to answer your question: I have a table called paymentlog with 709,231 records in it. For test purposes I made sure that it has no indexes, in particular none on the column used in the WHERE condition.

I used EXPLAIN and got the following:

explain select * from paymentlog where transdate = '2012-12-01' limit 10 ; 

+----+-------------+------------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+------------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | paymentlog | ALL  | NULL          | NULL | NULL    | NULL | 709231 | Using where |
+----+-------------+------------+------+---------------+------+---------+------+--------+-------------+

You can see it scans all the rows in the table even though I added a LIMIT. So LIMIT does not make this query faster; it only reduces the volume of data returned.

Now, if I add an index on transdate and run the same EXPLAIN, I get:

+----+-------------+------------+------+--------------------+--------------------+---------+-------+------+-------+
| id | select_type | table      | type | possible_keys      | key                | key_len | ref   | rows | Extra |
+----+-------------+------------+------+--------------------+--------------------+---------+-------+------+-------+
|  1 | SIMPLE      | paymentlog | ref  | plog_transdate_idx | plog_transdate_idx | 3       | const | 1069 |       |
+----+-------------+------------+------+--------------------+--------------------+---------+-------+------+-------+
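For reference, the index that produced this plan (its name, plog_transdate_idx, appears in the key column above) could be created like this; the exact column type is an assumption:

```sql
-- Add the index used in the second EXPLAIN; key_len = 3 suggests a DATE column
ALTER TABLE paymentlog ADD INDEX plog_transdate_idx (transdate);

-- Re-run the EXPLAIN to confirm the index is picked up
EXPLAIN SELECT * FROM paymentlog WHERE transdate = '2012-12-01' LIMIT 10;
```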

The reported row estimate is now 1069. It will most likely not scan all 1069 rows, since it stops as soon as the LIMIT is satisfied; but if you fetch with an offset, say LIMIT 1000,70, it does have to scan up to row 1069. So the index on the column in the WHERE condition reduces the rows scanned, which is far better than 709231.

So the conclusion is: an index reduces the number of rows scanned. With the same LIMIT, at most 1069 rows are scanned with the index, versus 709231 without it.

In most cases, like your example, YES, because MySQL will stop scanning for results once the limit is reached.

BUT if producing the results depends on something like ORDER BY, LIMIT still helps, though it may not give the speed you expect.

SELECT * FROM table WHERE indexed_field = 2 LIMIT 10;                       -- fast
SELECT * FROM table WHERE indexed_field = 2 ORDER BY noindexfield LIMIT 10; -- slower, uses filesort
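One way to avoid the filesort in the second query, assuming such a composite index fits your workload, is to index the filter column and the sort column together so MySQL can read rows already in order. The table, column, and index names below are the placeholders from the example above:

```sql
-- Composite index: equality filter on the first column, sort by the second
ALTER TABLE `table` ADD INDEX idx_field_sort (indexed_field, noindexfield);

-- Now the ORDER BY can be satisfied by the index, so EXPLAIN
-- should no longer report "Using filesort":
EXPLAIN SELECT * FROM `table`
WHERE indexed_field = 2 ORDER BY noindexfield LIMIT 10;
```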

Use MySQL's EXPLAIN to optimize your SELECTs. It gets more complicated when GROUP BY is involved.

Yes,

fetching a limited batch of rows from the database is faster and better than fetching the whole contents.

In PHP, that is how you implement pagination.
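A typical pagination pattern uses LIMIT with an offset, sketched here against the reversals table from the question (the id ordering column is an assumption):

```sql
-- Page 3 with 20 rows per page: skip 2 * 20 = 40 rows, take the next 20.
-- A deterministic ORDER BY is needed so pages stay stable between requests.
SELECT * FROM reversals
WHERE seen = 'false'
ORDER BY id
LIMIT 20 OFFSET 40;
```

Note that large offsets still make MySQL walk past all the skipped rows; for deep pages, keyset pagination (e.g. `WHERE id > :last_seen_id ... LIMIT 20`) avoids that cost.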
