
SQL JOIN in PostgreSQL - Different execution plan in WHERE clause than in ON clause

We have a simple statement in PostgreSQL 11.9/11.10 or 12.5 where the join condition can be written either in a WHERE clause or in an ON clause. The meaning is exactly the same, and therefore so is the number of returned rows - but we get two different execution plans. With more data in the tables, one of the plans becomes really bad, and we want to understand why PostgreSQL chooses different plans in this situation. Any ideas?

Let's create some sample data:

CREATE TABLE t1 (
    t1_nr int8 NOT NULL,
    name varchar(60),
    CONSTRAINT t1_pk PRIMARY KEY (t1_nr)
);

INSERT INTO t1 (t1_nr, name) SELECT s, left(md5(random()::text), 10) FROM generate_series(1, 1000000) s; -- 1 million records

CREATE TABLE t2 (
    t2_nr int8 NOT NULL,
    CONSTRAINT t2_pk PRIMARY KEY (t2_nr)
);

INSERT INTO t2 (t2_nr) SELECT s FROM generate_series(1, 10000000) s; -- 10 million records

CREATE TABLE t3 (
    t1_nr int8 NOT NULL,
    t2_nr int8 NOT NULL,
    CONSTRAINT t3_pk PRIMARY KEY (t2_nr, t1_nr)
);

INSERT INTO t3 (t1_nr, t2_nr) SELECT (s-1)/10+1, s FROM generate_series(1, 10000000) s; -- 10 t2 records per t1 record --> 10 million records
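The plans below are described as taken with "fully analyzed statistics"; presumably the tables were analyzed right after the bulk load so the planner has current row counts, e.g.:

ANALYZE t1;
ANALYZE t2;
ANALYZE t3;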

Our statement, with fully analyzed statistics:

EXPLAIN (BUFFERS, ANALYZE)
SELECT t1.*
FROM t1 t1
WHERE EXISTS (
    SELECT 1
    FROM t3 t3
    JOIN t2 t2 ON t2.t2_nr = t3.t2_nr
    --AND t3.t1_nr = t1.t1_nr /* GOOD (using ON-CLAUSE) */
    WHERE t3.t1_nr = t1.t1_nr /* BAD (using WHERE-CLAUSE) */
)
LIMIT 1000
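For reference, here is the "GOOD" variant written out in full; it is the same query with the correlated condition moved from the WHERE clause into the ON clause:

EXPLAIN (BUFFERS, ANALYZE)
SELECT t1.*
FROM t1 t1
WHERE EXISTS (
    SELECT 1
    FROM t3 t3
    JOIN t2 t2 ON t2.t2_nr = t3.t2_nr
              AND t3.t1_nr = t1.t1_nr /* GOOD (using ON-CLAUSE) */
)
LIMIT 1000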

The explain plan with the "GOOD" row (ON clause):

QUERY PLAN                                                                                                                            |
--------------------------------------------------------------------------------------------------------------------------------------|
Limit  (cost=0.00..22896.86 rows=1000 width=19) (actual time=0.028..4.801 rows=1000 loops=1)                                          |
  Buffers: shared hit=8015                                                                                                            |
  ->  Seq Scan on t1  (cost=0.00..11448428.92 rows=500000 width=19) (actual time=0.027..4.725 rows=1000 loops=1)                      |
        Filter: (SubPlan 1)                                                                                                           |
        Buffers: shared hit=8015                                                                                                      |
        SubPlan 1                                                                                                                     |
          ->  Nested Loop  (cost=0.87..180.43 rows=17 width=0) (actual time=0.004..0.004 rows=1 loops=1000)                           |
                Buffers: shared hit=8008                                                                                              |
                ->  Index Only Scan using t3_pk on t3  (cost=0.43..36.73 rows=17 width=8) (actual time=0.002..0.002 rows=1 loops=1000)|
                      Index Cond: (t1_nr = t1.t1_nr)                                                                                  |
                      Heap Fetches: 1000                                                                                              |
                      Buffers: shared hit=4003                                                                                        |
                ->  Index Only Scan using t2_pk on t2  (cost=0.43..8.45 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=1000)  |
                      Index Cond: (t2_nr = t3.t2_nr)                                                                                  |
                      Heap Fetches: 1000                                                                                              |
                      Buffers: shared hit=4005                                                                                        |
Planning Time: 0.267 ms                                                                                                               |
Execution Time: 4.880 ms                                                                                                              |

The explain plan with the "BAD" row (WHERE clause):

QUERY PLAN                                                                                                                                                   |
-------------------------------------------------------------------------------------------------------------------------------------------------------------|
Limit  (cost=1166.26..7343.42 rows=1000 width=19) (actual time=16.888..75.809 rows=1000 loops=1)                                                             |
  Buffers: shared hit=51883 read=11 dirtied=2                                                                                                                |
  ->  Merge Semi Join  (cost=1166.26..3690609.61 rows=597272 width=19) (actual time=16.887..75.703 rows=1000 loops=1)                                        |
        Merge Cond: (t1.t1_nr = t3.t1_nr)                                                                                                                    |
        Buffers: shared hit=51883 read=11 dirtied=2                                                                                                          |
        ->  Index Scan using t1_pk on t1  (cost=0.42..32353.42 rows=1000000 width=19) (actual time=0.010..0.271 rows=1000 loops=1)                           |
              Buffers: shared hit=12                                                                                                                         |
        ->  Gather Merge  (cost=1000.89..3530760.13 rows=9999860 width=8) (actual time=16.873..74.064 rows=9991 loops=1)                                     |
              Workers Planned: 2                                                                                                                             |
              Workers Launched: 2                                                                                                                            |
              Buffers: shared hit=51871 read=11 dirtied=2                                                                                                    |
              ->  Nested Loop  (cost=0.87..2375528.14 rows=4166608 width=8) (actual time=0.054..14.275 rows=4309 loops=3)                                    |
                    Buffers: shared hit=51871 read=11 dirtied=2                                                                                              |
                    ->  Parallel Index Only Scan using t3_pk on t3  (cost=0.43..370689.69 rows=4166608 width=16) (actual time=0.028..1.495 rows=4309 loops=3)|
                          Heap Fetches: 12927                                                                                                                |
                          Buffers: shared hit=131 read=6                                                                                                     |
                    ->  Index Only Scan using t2_pk on t2  (cost=0.43..0.48 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=12927)                    |
                          Index Cond: (t2_nr = t3.t2_nr)                                                                                                     |
                          Heap Fetches: 12927                                                                                                                |
                          Buffers: shared hit=51740 read=5 dirtied=2                                                                                         |
Planning Time: 0.475 ms                                                                                                                                      |
Execution Time: 75.947 ms                                                                                                                                    |

Thanks for your ideas. If we add an index like

CREATE INDEX t3_t1_nr ON t3(t1_nr);

the "BAD" statement improves a little.

But the final solution for us was to increase the statistics gathered for these tables:

ALTER TABLE t1 ALTER COLUMN t1_nr SET STATISTICS 10000;
ALTER TABLE t2 ALTER COLUMN t2_nr SET STATISTICS 10000;
ALTER TABLE t3 ALTER COLUMN t1_nr SET STATISTICS 10000;

ANALYZE t1;
ANALYZE t2;
ANALYZE t3;
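A quick way to check that the new per-column targets took effect is to look at attstattarget in the standard pg_attribute catalog (a small verification sketch, not part of the original answer):

SELECT attrelid::regclass AS table_name,
       attname            AS column_name,
       attstattarget      AS statistics_target
FROM pg_attribute
WHERE attrelid IN ('t1'::regclass, 't2'::regclass, 't3'::regclass)
  AND attname IN ('t1_nr', 't2_nr');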

After this change, both SELECTs have about the same execution time. More information can be found here: https://www.postgresql.org/docs/12/planner-stats.html
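If raising the target column by column gets unwieldy, the cluster-wide default_statistics_target setting (default 100) can be raised instead; this is only a sketch of an alternative, not something tested in the original answer:

ALTER SYSTEM SET default_statistics_target = 1000;
SELECT pg_reload_conf();  -- the setting takes effect without a restart
ANALYZE;                  -- re-gather statistics for all tables with the new target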
