I'm having a problem with the system being overloaded. The query below pulls data from three tables, two of which have more than 10,000 records, and it takes 50 seconds to run.
SELECT DISTINCT
    p.prod_name,
    p.prod_price,
    SUM(dt.vt_qtd) AS total_qtd
FROM tdb_products p
LEFT JOIN tdb_sales_temp dt ON p.prod_mp_id = dt.vt_product
LEFT JOIN tdb_sales s ON dt.vt_cupom = s.sl_coupom
WHERE
    s.sl_day = $day_link AND
    s.sl_mon = $mon_link AND
    s.sl_year = $year_link
GROUP BY
    p.prod_name
ORDER BY
    p.prod_name ASC
Is this normal?
Resolved!
SELECT prod_name, prod_price, SUM(dt.vt_qtd) AS total_qtd
FROM tdb_sales s
JOIN tdb_sales_temp dt
ON dt.vt_cupom = s.sl_coupom
JOIN tdb_products p
ON p.prod_mp_id = dt.vt_product
WHERE (s.sl_day, s.sl_mon, s.sl_year) = ($day_link, $mon_link, $year_link)
GROUP BY
p.prod_name -- but it's better to group by product's PRIMARY KEY
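The GROUP BY comment can be sketched as follows. Assuming the primary key of tdb_products is a column named prod_id (a hypothetical name; substitute your actual key), grouping by it is safe because the non-aggregated prod_name and prod_price columns are functionally dependent on it:

```sql
-- prod_id is a hypothetical primary-key column; use your real one.
SELECT p.prod_name, p.prod_price, SUM(dt.vt_qtd) AS total_qtd
FROM tdb_sales s
JOIN tdb_sales_temp dt ON dt.vt_cupom = s.sl_coupom
JOIN tdb_products p ON p.prod_mp_id = dt.vt_product
WHERE (s.sl_day, s.sl_mon, s.sl_year) = ($day_link, $mon_link, $year_link)
GROUP BY p.prod_id  -- prod_name and prod_price are determined by prod_id
```

Grouping by the key also avoids accidentally merging two distinct products that happen to share a name.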
Remove DISTINCT (it's redundant, as you have GROUP BY and select only the grouping field).
Rewrite the LEFT JOINs as INNER JOINs, since you have a filtering condition on a LEFT JOINed table.
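Why the filter cancels the outer join, as a minimal sketch using the original query's tables:

```sql
-- A product with no matching sale comes back with every s.* column NULL.
-- The WHERE clause then discards that row, because s.sl_day = $day_link
-- evaluates to NULL (not TRUE) when s.sl_day is NULL. The LEFT JOIN is
-- therefore effectively an INNER JOIN; writing it as one states the intent
-- and gives the optimizer more freedom to reorder the joins.
SELECT p.prod_name
FROM tdb_products p
LEFT JOIN tdb_sales_temp dt ON p.prod_mp_id = dt.vt_product
LEFT JOIN tdb_sales s ON dt.vt_cupom = s.sl_coupom
WHERE s.sl_day = $day_link;  -- filters out every NULL-extended row
```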
Create indexes:
tdb_sales (sl_year, sl_mon, sl_day, sl_coupom)
tdb_sales_temp (vt_cupom, vt_product)
tdb_products (prod_mp_id) -- it's probably the PRIMARY KEY and you already have it
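The index suggestions above translate to statements like these (the index names are placeholders, and exact syntax varies slightly by DBMS):

```sql
CREATE INDEX idx_sales_date_coupom
    ON tdb_sales (sl_year, sl_mon, sl_day, sl_coupom);
CREATE INDEX idx_sales_temp_cupom_product
    ON tdb_sales_temp (vt_cupom, vt_product);
```

The column order matters: the date columns come first so the WHERE filter can use the leftmost prefix, and sl_coupom is appended so the join key is served from the same index.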
Short answer is no, that is definitely not an acceptable length of time. Any common database system should handle multiple 10,000-row tables in sub-second time.
Not knowing the full schema or DBMS back end, my recommendations would be:
Indexing - make sure that the columns used in the joins have proper indexes on them.
Data type - if there is a difference in data type between the columns being joined, the DBMS has to perform a conversion for each row comparison, which can lead to a significant performance drain.
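A hypothetical illustration of the data-type point: if dt.vt_cupom were a character column while s.sl_coupom is an integer (types assumed here purely for the example), the DBMS must convert one side for every candidate row, and the implicit conversion typically prevents index use on the converted column:

```sql
-- Assumed types for illustration only. The real fix is to align the column
-- types in the schema; the explicit CAST below merely makes the hidden
-- per-row conversion visible.
SELECT dt.vt_product
FROM tdb_sales s
JOIN tdb_sales_temp dt
  ON dt.vt_cupom = CAST(s.sl_coupom AS CHAR(20));  -- MySQL-style cast
```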