I'm migrating a database from Oracle to PostgreSQL and I've run into a performance problem with an update query.
```sql
explain update airepp.EQP_CALC_STAT_EVENEMENT e
set EVT_COM_POPULATION = (
    SELECT f.COM_RECENSEMENT_DER_POPULATION
    FROM airepp.EQP_FOURNISSEUR f
    WHERE e.EVT_LIEU_LOC_CODE = f.COM_CODE
);
```
In Oracle this query takes about 5 minutes, while in PostgreSQL it takes 55 minutes. Both databases have the same indexes on exactly the same columns. Below are the PostgreSQL and Oracle plans for this query.
Oracle plan:
I have also tried the following, but it was even worse at 66 minutes:
```sql
explain update airepp.EQP_CALC_STAT_EVENEMENT e
set EVT_COM_POPULATION = f.COM_RECENSEMENT_DER_POPULATION
FROM airepp.EQP_FOURNISSEUR f
where e.EVT_LIEU_LOC_CODE = f.COM_CODE;
```
Is there another way to write this query so that it performs the same as Oracle, or close to it?
You may be facing this problem because of the large number of per-row index lookups. For a reference table whose cardinality is much smaller than the main table's, you can try `update ... from ...`, which can leverage a hash join.

Here is an example:
```sql
create table t as
  select v::text as some_code, random() as some_value
  from generate_series(1, 20000) as v;

create unique index t_idx on t(some_code);

create table t_big as
  select v as id, trunc(v / 10)::text as some_code, null as some_value
  from generate_series(1, 100000) v;
```
```sql
explain analyze update t_big
set some_value = t.some_value
from t
where t_big.some_code = t.some_code;
```
```
 Update on t_big  (cost=559.00..1885.53 rows=45135 width=80) (actual time=3908.306..3908.309 rows=0 loops=1)
   ->  Hash Join  (cost=559.00..1885.53 rows=45135 width=80) (actual time=7.715..1709.967 rows=99991 loops=1)
         Hash Cond: (t_big.some_code = t.some_code)
         ->  Seq Scan on t_big  (cost=0.00..982.35 rows=45135 width=42) (actual time=0.048..390.277 rows=100000 loops=1)
         ->  Hash  (cost=309.00..309.00 rows=20000 width=46) (actual time=7.564..7.565 rows=20000 loops=1)
               Buckets: 32768  Batches: 1  Memory Usage: 1261kB
               ->  Seq Scan on t  (cost=0.00..309.00 rows=20000 width=46) (actual time=0.017..3.439 rows=20000 loops=1)
 Planning Time: 0.678 ms
 Execution Time: 3908.437 ms
```
```sql
explain analyze update t_big
set some_value = (
    select some_value
    from t
    where t.some_code = t_big.some_code
);
```
```
 Update on t_big  (cost=0.00..1054578.00 rows=126600 width=46) (actual time=7759.679..7759.680 rows=0 loops=1)
   ->  Seq Scan on t_big  (cost=0.00..1054578.00 rows=126600 width=46) (actual time=0.086..5146.080 rows=100000 loops=1)
         SubPlan 1
           ->  Index Scan using t_idx on t  (cost=0.29..8.30 rows=1 width=8) (actual time=0.027..0.028 rows=1 loops=100000)
                 Index Cond: (some_code = t_big.some_code)
 Planning Time: 0.217 ms
 Execution Time: 7759.737 ms
```
db<>fiddle here
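One caveat, in case it matters for your data (the question doesn't say): the two forms are not strictly equivalent. The scalar-subquery version sets the target column to NULL for rows that have no match in the lookup table, while `update ... from` leaves those rows untouched. If you need the NULL-ing behavior together with the hash join, a sketch using the example tables above (and assuming, as in the example, that `t_big.id` is unique) could look like this:

```sql
-- Sketch only: the LEFT JOIN in the derived table produces NULL for
-- rows of t_big with no match in t, so those rows are set to NULL
-- just like the scalar-subquery form, while the lookup itself can
-- still be executed as a hash join.
update t_big
set some_value = s.some_value
from (
    select b.id, t.some_value
    from t_big b
    left join t on t.some_code = b.some_code
) s
where t_big.id = s.id;
```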