
How to Trace MySQL Slow-Query-Log-File Entries Using a PHP Profiler

I just started using a PHP profiler to find which files in my PHP script cause slow MySQL queries. Someone suggested that I use Xdebug and compare the timestamp from the slow-log entry against the profiler output for whatever was executing at that same time.

I have read the Xdebug documentation but can't find an explanation of how to do this.

Can anyone enlighten me?

I'm using PHP 7.0 on Debian 9.
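
For reference, a minimal sketch of a first step, assuming Xdebug 2.x (the series available for PHP 7.0): check where the profiler writes its output and how the files are named. With a timestamp placeholder such as %t in xdebug.profiler_output_name, each cachegrind file can later be matched against the timestamp in the slow-log entry.

<?php
// Report the active Xdebug profiler settings (Xdebug 2.x setting names).
// ini_get() returns false if the setting, or Xdebug itself, is not loaded.
echo 'profiler enabled: ', var_export(ini_get('xdebug.profiler_enable'), true), PHP_EOL;
echo 'output dir:       ', var_export(ini_get('xdebug.profiler_output_dir'), true), PHP_EOL;
echo 'output name:      ', var_export(ini_get('xdebug.profiler_output_name'), true), PHP_EOL;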

My slow-query-log-file entries:

# Thread_id: 222244  Schema: user  QC_hit: No
# Query_time: 51.019708  Lock_time: 0.000119  Rows_sent: 1  Rows_examined: 13295012
# Rows_affected: 0
SET timestamp=1559388099;
SELECT (COUNT(*)) AS `count` 
FROM statistics Statistics WHERE (id >= 1 AND ad_type <> 3);
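
Once the profiler output directory is known, one rough way to line the two up (the /tmp directory and the 60-second window below are assumptions, not values from the original post) is to convert the Unix timestamp from the SET timestamp=1559388099; line into a readable date and compare it against the modification times of the Xdebug profiler output files, which are named cachegrind.out.* by default:

<?php
// Convert the slow-log timestamp into a readable date.
$slowLogTimestamp = 1559388099;
echo date('Y-m-d H:i:s', $slowLogTimestamp), PHP_EOL;

// List profiler output files written around the same time.
// /tmp is Xdebug's default output directory; adjust if yours differs.
$profilerDir = '/tmp';
foreach (glob($profilerDir . '/cachegrind.out.*') ?: [] as $file) {
    if (abs(filemtime($file) - $slowLogTimestamp) <= 60) {
        echo $file, ' (modified ', date('Y-m-d H:i:s', filemtime($file)), ')', PHP_EOL;
    }
}

Any file that falls inside the window can then be opened in a cachegrind viewer to see which application files and functions were running when the slow query fired.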

Edit:

This is not about counting rows in a SELECT statement; it's about how to track down which application files cause the slow queries.
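
If the profiler route turns out to be too indirect, another option is to measure the queries from the application side. The sketch below is only an illustration: the class name, the threshold, and the use of error_log() as the destination are all assumptions. It wraps mysqli::query(), times each query, and records the calling file and line whenever a query crosses the threshold.

<?php
class TrackedConnection
{
    private $mysqli;
    private $slowThreshold; // seconds

    public function __construct(mysqli $mysqli, float $slowThreshold = 1.0)
    {
        $this->mysqli = $mysqli;
        $this->slowThreshold = $slowThreshold;
    }

    public function query(string $sql)
    {
        $start = microtime(true);
        $result = $this->mysqli->query($sql);
        $elapsed = microtime(true) - $start;

        if ($elapsed >= $this->slowThreshold) {
            // Frame 0 of the backtrace holds the file and line that called query().
            $caller = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 1)[0];
            error_log(sprintf(
                'Slow query (%.3fs) from %s:%d -- %s',
                $elapsed,
                $caller['file'] ?? 'unknown',
                $caller['line'] ?? 0,
                $sql
            ));
        }

        return $result;
    }
}

Any query slower than the threshold then shows up in the PHP error log together with the file and line that issued it, which can be matched against the corresponding slow-query-log entry.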

The most efficient way to count rows in a large table is to store the count in a separate table and increase/decrease that value when required. That way you're only querying a single cell, and a 51-second query becomes less than 1 second.

I know it feels redundant, but it is the most efficient approach.

There are posts that suggest querying INFORMATION_SCHEMA, but that doesn't help here because you need a WHERE clause, and everything else is just as inefficient as the query you already have.

All you need is the current count, a place to store it, and functionality to increase/decrease it, and you're good to go.
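
For illustration, a minimal sketch of that approach, assuming a hypothetical statistics_counts table with counter_name and current_count columns (adapt the names and the connection details to your schema):

<?php
$mysqli = new mysqli('localhost', 'db_user', 'db_password', 'user');

// Increment the stored count whenever a row matching the condition is inserted...
$mysqli->query(
    "UPDATE statistics_counts
        SET current_count = current_count + 1
      WHERE counter_name = 'statistics_not_type_3'"
);

// ...and decrement it when such a row is deleted.
$mysqli->query(
    "UPDATE statistics_counts
        SET current_count = current_count - 1
      WHERE counter_name = 'statistics_not_type_3'"
);

// Reading the count now touches a single row instead of examining 13,295,012 rows.
$row = $mysqli
    ->query("SELECT current_count FROM statistics_counts WHERE counter_name = 'statistics_not_type_3'")
    ->fetch_assoc();
echo $row['current_count'];

The same bookkeeping could also be done with INSERT/DELETE triggers on the statistics table, so the counter stays correct even when application code forgets to update it.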
