Performance issue for join on the same tables multiple times

I am facing a performance problem with the following query, in which the same table is self-joined multiple times. How can I avoid multiple joins on the same table?

INSERT INTO "TEMP"."TABLE2"
SELECT
T1."PRODUCT_SNO"
,T2."PRODUCT_SNO"
,T3."PRODUCT_SNO"
,T4."PRODUCT_SNO"
,((COUNT(DISTINCT T1."ACCESS_METHOD_ID")(FLOAT)) / 
   (MAX(T5.GROUP_NUM(FLOAT))))
FROM
"TEMP"."TABLE1" T1
,"TEMP"."TABLE1" T2
,"TEMP"."TABLE1" T3
,"TEMP"."TABLE1" T4
,"TEMP"."_TWM_GROUP_COUNT" T5
WHERE
      T1."ACCESS_METHOD_ID" = T2."ACCESS_METHOD_ID"
  AND T2."ACCESS_METHOD_ID" = T3."ACCESS_METHOD_ID"
  AND T3."ACCESS_METHOD_ID" = T4."ACCESS_METHOD_ID"
  AND T1."SUBSCRIPTION_DATE" < T2."SUBSCRIPTION_DATE"
  AND T2."SUBSCRIPTION_DATE" < T3."SUBSCRIPTION_DATE"
  AND T3."SUBSCRIPTION_DATE" < T4."SUBSCRIPTION_DATE"
GROUP BY 1, 2, 3, 4;

This takes 3 hours to complete. Below is its EXPLAIN:

1) First, we lock a distinct TEMP."pseudo table" for write on a
     RowHash to prevent global deadlock for
     TEMP.TABLE2. 
  2) Next, we lock a distinct TEMP."pseudo table" for read on a
     RowHash to prevent global deadlock for TEMP.T5. 
  3) We lock TEMP.TABLE2 for write, we lock
     TEMP.TABLE1 for access, and we lock TEMP.T5 for read. 
  4) We do an all-AMPs RETRIEVE step from TEMP.T5 by way of an
     all-rows scan with no residual conditions into Spool 4 (all_amps),
     which is duplicated on all AMPs.  The size of Spool 4 is estimated
     with high confidence to be 48 rows (816 bytes).  The estimated
     time for this step is 0.01 seconds. 
  5) We execute the following steps in parallel. 
       1) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of
          an all-rows scan, which is joined to TEMP.T4 by way of an
          all-rows scan with no residual conditions.  Spool 4 and
          TEMP.T4 are joined using a product join, with a join
          condition of ("(1=1)").  The result goes into Spool 5
          (all_amps), which is built locally on the AMPs.  Then we do a
          SORT to order Spool 5 by the hash code of (
          TEMP.T4.ACCESS_METHOD_ID).  The size of Spool 5 is
          estimated with high confidence to be 8,051,801 rows (
          233,502,229 bytes).  The estimated time for this step is 1.77
          seconds. 
       2) We do an all-AMPs JOIN step from TEMP.T2 by way of a
          RowHash match scan with no residual conditions, which is
          joined to TEMP.T1 by way of a RowHash match scan with no
          residual conditions.  TEMP.T2 and TEMP.T1 are joined
          using a merge join, with a join condition of (
          "(TEMP.T1.ACCESS_METHOD_ID = TEMP.T2.ACCESS_METHOD_ID)
          AND (TEMP.T1.SUBSCRIPTION_DATE <
          TEMP.T2.SUBSCRIPTION_DATE)").  The result goes into Spool
          6 (all_amps), which is built locally on the AMPs.  The size
          of Spool 6 is estimated with low confidence to be 36,764,681
          rows (1,213,234,473 bytes).  The estimated time for this step
          is 4.12 seconds. 
  6) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of a
     RowHash match scan, which is joined to TEMP.T3 by way of a
     RowHash match scan with no residual conditions.  Spool 5 and
     TEMP.T3 are joined using a merge join, with a join condition
     of ("(TEMP.T3.SUBSCRIPTION_DATE < SUBSCRIPTION_DATE) AND
     (TEMP.T3.ACCESS_METHOD_ID = ACCESS_METHOD_ID)").  The result
     goes into Spool 7 (all_amps), which is built locally on the AMPs. 
     The size of Spool 7 is estimated with low confidence to be
     36,764,681 rows (1,360,293,197 bytes).  The estimated time for
     this step is 4.14 seconds. 
  7) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of a
     RowHash match scan, which is joined to Spool 7 (Last Use) by way
     of a RowHash match scan.  Spool 6 and Spool 7 are joined using a
     merge join, with a join condition of ("(SUBSCRIPTION_DATE <
     SUBSCRIPTION_DATE) AND ((ACCESS_METHOD_ID = ACCESS_METHOD_ID) AND
     ((ACCESS_METHOD_ID = ACCESS_METHOD_ID) AND ((ACCESS_METHOD_ID =
     ACCESS_METHOD_ID) AND (ACCESS_METHOD_ID = ACCESS_METHOD_ID ))))"). 
     The result goes into Spool 3 (all_amps), which is built locally on
     the AMPs.  The result spool file will not be cached in memory. 
     The size of Spool 3 is estimated with low confidence to be
     766,489,720 rows (29,893,099,080 bytes).  The estimated time for
     this step is 1 minute and 21 seconds. 
  8) We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by
     way of an all-rows scan , grouping by field1 (
     TEMP.T1.PRODUCT_SNO ,TEMP.T2.PRODUCT_SNO
     ,TEMP.T3.PRODUCT_SNO ,TEMP.T4.PRODUCT_SNO
     ,TEMP.T1.ACCESS_METHOD_ID).  Aggregate Intermediate Results
     are computed globally, then placed in Spool 9.  The aggregate
     spool file will not be cached in memory.  The size of Spool 9 is
     estimated with low confidence to be 574,867,290 rows (
     46,564,250,490 bytes).  The estimated time for this step is 6
     minutes and 38 seconds. 
  9) We do an all-AMPs SUM step to aggregate from Spool 9 (Last Use) by
     way of an all-rows scan , grouping by field1 (
     TEMP.T1.PRODUCT_SNO ,TEMP.T2.PRODUCT_SNO
     ,TEMP.T3.PRODUCT_SNO ,TEMP.T4.PRODUCT_SNO).  Aggregate
     Intermediate Results are computed globally, then placed in Spool
     11.  The size of Spool 11 is estimated with low confidence to be
     50,625 rows (3,695,625 bytes).  The estimated time for this step
     is 41.87 seconds. 
 10) We do an all-AMPs RETRIEVE step from Spool 11 (Last Use) by way of
     an all-rows scan into Spool 1 (all_amps), which is redistributed
     by the hash code of (TEMP.T1.PRODUCT_SNO,
     TEMP.T2.PRODUCT_SNO, TEMP.T3.PRODUCT_SNO,
     TEMP.T4.PRODUCT_SNO) to all AMPs.  Then we do a SORT to order
     Spool 1 by row hash.  The size of Spool 1 is estimated with low
     confidence to be 50,625 rows (1,873,125 bytes).  The estimated
     time for this step is 0.04 seconds. 
 11) We do an all-AMPs MERGE into TEMP.TABLE2 from
     Spool 1 (Last Use).  The size is estimated with low confidence to
     be 50,625 rows.  The estimated time for this step is 1 second. 
 12) We spoil the parser's dictionary cache for the table. 
 13) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> No rows are returned to the user as the result of statement 1. 

All the required statistics have been collected.

I must admit I'm not a Teradata expert, but a quick check shows that you can use ANSI JOIN syntax.

So, first of all, I rewrote your query so that I could understand it:

INSERT INTO 
    "TEMP"."TABLE2"
SELECT
    T1."PRODUCT_SNO",
    T2."PRODUCT_SNO",
    T3."PRODUCT_SNO",
    T4."PRODUCT_SNO",
    ((COUNT(DISTINCT T1."ACCESS_METHOD_ID")(FLOAT)) / 
        (MAX(T5.GROUP_NUM(FLOAT))))
FROM
    "TEMP"."TABLE1" T1
    INNER JOIN "TEMP"."TABLE1" T2 ON T2."ACCESS_METHOD_ID" = T1."ACCESS_METHOD_ID" 
        AND T2."SUBSCRIPTION_DATE" > T1."SUBSCRIPTION_DATE"
    INNER JOIN "TEMP"."TABLE1" T3 ON T3."ACCESS_METHOD_ID" = T2."ACCESS_METHOD_ID" 
        AND T3."SUBSCRIPTION_DATE" > T2."SUBSCRIPTION_DATE"
    INNER JOIN "TEMP"."TABLE1" T4 ON T4."ACCESS_METHOD_ID" = T3."ACCESS_METHOD_ID" 
        AND T4."SUBSCRIPTION_DATE" > T3."SUBSCRIPTION_DATE"
    CROSS JOIN "TEMP"."_TWM_GROUP_COUNT" T5
GROUP BY 
    T1."PRODUCT_SNO",
    T2."PRODUCT_SNO",
    T3."PRODUCT_SNO",
    T4."PRODUCT_SNO";

Note that many of these changes are just personal preference, but others drag your query "into the 21st century" ;P

Now that I can read your SQL, I can make some assumptions about what you are actually trying to achieve here:

  • You have some tables containing products, each with a serial number, an "access method" (no idea what that is?) and a subscription date;
  • You are looking for products with the same "access method", chaining them together in subscription-date order, and then showing the serial number of each product in the chain;
  • Each chain must be exactly 4 products long. No idea what happens if a chain has fewer or more than 4 products (I can see that chains of fewer than 4 products get discarded);
  • You also have a metric layered on top of this logic: you count the number of distinct access methods per chain and divide that by a number taken from another table we know nothing about.

Not a lot to go on, but I can see a few places that could be optimised:

  • You only use the _TWM_GROUP_COUNT table for one thing, MAX(GROUP_NUM). So you could work that out before the main query, and then this potentially expensive JOIN is no longer needed. I don't know the best way to do that in Teradata, but in other SQL dialects you could stick it in a variable, use a common table expression, use a subquery, etc. (see the first sketch after this list). If that table has a lot of rows, there is a chance the optimiser runs the query x times and throws away x-1 result sets!
  • Any non-equi join is going to be inefficient, but that doesn't seem avoidable here. If your table isn't indexed by SUBSCRIPTION_DATE, it might help to pre-sort the data and add a numeric sequence number (again, in other SQL dialects this would be ROW_NUMBER() OVER (ORDER BY SUBSCRIPTION_DATE)-type syntax; see the second sketch below); your date comparisons could then become numeric comparisons;
  • Obviously, indexing is important here;
  • Finally, you could split the query into multiple stages, starting with the T1-to-T2 join, then using that as the basis for the (T1-to-T2)-to-T3 join, and so on (the last sketch below). It might not help, but it could be worth a try?
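
For the first point, here is a minimal sketch of what that could look like, assuming Teradata accepts a one-row derived table in the FROM clause (untested on my side); it pulls the divisor out of the big four-way join:

-- Sketch: compute MAX(GROUP_NUM) once in a one-row derived table,
-- instead of joining the whole _TWM_GROUP_COUNT table.
INSERT INTO "TEMP"."TABLE2"
SELECT
    T1."PRODUCT_SNO",
    T2."PRODUCT_SNO",
    T3."PRODUCT_SNO",
    T4."PRODUCT_SNO",
    CAST(COUNT(DISTINCT T1."ACCESS_METHOD_ID") AS FLOAT) / MAX(G.MAX_GROUP_NUM)
FROM
    "TEMP"."TABLE1" T1
    INNER JOIN "TEMP"."TABLE1" T2 ON T2."ACCESS_METHOD_ID" = T1."ACCESS_METHOD_ID"
        AND T2."SUBSCRIPTION_DATE" > T1."SUBSCRIPTION_DATE"
    INNER JOIN "TEMP"."TABLE1" T3 ON T3."ACCESS_METHOD_ID" = T2."ACCESS_METHOD_ID"
        AND T3."SUBSCRIPTION_DATE" > T2."SUBSCRIPTION_DATE"
    INNER JOIN "TEMP"."TABLE1" T4 ON T4."ACCESS_METHOD_ID" = T3."ACCESS_METHOD_ID"
        AND T4."SUBSCRIPTION_DATE" > T3."SUBSCRIPTION_DATE"
    CROSS JOIN (
        SELECT CAST(MAX(GROUP_NUM) AS FLOAT) AS MAX_GROUP_NUM
        FROM "TEMP"."_TWM_GROUP_COUNT"
    ) G
GROUP BY 1, 2, 3, 4;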
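
For the second point, Teradata does support window functions, so the sequence number could be materialised up front. A sketch, assuming a volatile table is workable here and that SUBSCRIPTION_DATE is unique within each ACCESS_METHOD_ID (ties would otherwise change the strict < semantics); SEQ_TABLE1 and SEQ_NO are made-up names:

-- Sketch: number each row within its ACCESS_METHOD_ID by subscription
-- date; the chain conditions can then compare small integers
-- (T2.SEQ_NO > T1.SEQ_NO and so on) instead of dates.
CREATE VOLATILE TABLE SEQ_TABLE1 AS (
    SELECT
        "PRODUCT_SNO",
        "ACCESS_METHOD_ID",
        "SUBSCRIPTION_DATE",
        ROW_NUMBER() OVER (
            PARTITION BY "ACCESS_METHOD_ID"
            ORDER BY "SUBSCRIPTION_DATE"
        ) AS SEQ_NO
    FROM "TEMP"."TABLE1"
) WITH DATA
PRIMARY INDEX ("ACCESS_METHOD_ID")
ON COMMIT PRESERVE ROWS;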
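
And for the last point, a sketch of the staged approach using the same volatile-table idea; PAIRS12 and TRIPLES123 are made-up names, and each stage carries forward just the columns the next stage needs:

-- Sketch: materialise the T1-to-T2 join first.
CREATE VOLATILE TABLE PAIRS12 AS (
    SELECT
        T1."PRODUCT_SNO"       AS SNO1,
        T2."PRODUCT_SNO"       AS SNO2,
        T1."ACCESS_METHOD_ID"  AS ACCESS_METHOD_ID,
        T2."SUBSCRIPTION_DATE" AS DATE2
    FROM "TEMP"."TABLE1" T1
    INNER JOIN "TEMP"."TABLE1" T2
        ON  T2."ACCESS_METHOD_ID" = T1."ACCESS_METHOD_ID"
        AND T2."SUBSCRIPTION_DATE" > T1."SUBSCRIPTION_DATE"
) WITH DATA
PRIMARY INDEX (ACCESS_METHOD_ID)
ON COMMIT PRESERVE ROWS;

-- Then extend the chain one link at a time:
CREATE VOLATILE TABLE TRIPLES123 AS (
    SELECT
        P.SNO1,
        P.SNO2,
        T3."PRODUCT_SNO"       AS SNO3,
        P.ACCESS_METHOD_ID,
        T3."SUBSCRIPTION_DATE" AS DATE3
    FROM PAIRS12 P
    INNER JOIN "TEMP"."TABLE1" T3
        ON  T3."ACCESS_METHOD_ID" = P.ACCESS_METHOD_ID
        AND T3."SUBSCRIPTION_DATE" > P.DATE2
) WITH DATA
PRIMARY INDEX (ACCESS_METHOD_ID)
ON COMMIT PRESERVE ROWS;

One more join from TRIPLES123 to TABLE1 gives the fourth link, after which the COUNT/MAX aggregation runs over a (hopefully) much smaller intermediate table.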

Probably not a huge amount of help, but without some sample data etc. there isn't really much more to go on.
