DELETE query performance

Original query

DELETE B FROM
TABLE_BASE B,
TABLE_INC  I
WHERE B.ID = I.ID AND B.NUM = I.NUM;

Performance stats for the above query

+-----------------+--------+-----------+
|   QryRespTime   | SumCPU | ImpactCPU |
+-----------------+--------+-----------+
| 00:05:29.190000 |   2852 |    319672 |
+-----------------+--------+-----------+

Optimized Query 1

DELETE FROM TABLE_BASE WHERE (ID, NUM) IN
(SELECT ID, NUM FROM TABLE_INC);

Stats for the above query

+-----------------+--------+-----------+
|   QryRespTime   | SumCPU | ImpactCPU |
+-----------------+--------+-----------+
| 00:00:00.570000 |  15.42 |     49.92 |
+-----------------+--------+-----------+

Optimized Query 2

DELETE FROM TABLE_BASE B WHERE EXISTS
(SELECT * FROM TABLE_INC I WHERE B.ID = I.ID AND B.NUM = I.NUM);

Stats for the above query

+-----------------+--------+-----------+
|   QryRespTime   | SumCPU | ImpactCPU |
+-----------------+--------+-----------+
| 00:00:00.400000 |  11.96 |     44.93 |
+-----------------+--------+-----------+

My questions:

  • How/why do Optimized Queries 1 and 2 affect performance so significantly?
  • What is the best practice for such DELETE queries?
  • Should I choose Query 1 or Query 2? Which one is ideal/better/more reliable? I feel Query 1 would be ideal because instead of SELECT * I am using SELECT ID, NUM, reducing it to only two columns, but Query 2 is showing better results.

QUERY 1 EXPLAIN PLAN

 This query is optimized using type 2 profile T2_Linux64, profileid 21.
  1) First, we lock TEMP_DB.TABLE_BASE for write on a
     reserved RowHash to prevent global deadlock.
  2) Next, we lock TEMP_DB_T.TABLE_INC for access, and we
     lock TEMP_DB.TABLE_BASE for write.
  3) We execute the following steps in parallel.
       1) We do an all-AMPs RETRIEVE step from
          TEMP_DB.TABLE_BASE by way of an all-rows scan
          with no residual conditions into Spool 2 (all_amps), which is
          redistributed by the hash code of (
          TEMP_DB.TABLE_BASE.NUM,
          TEMP_DB.TABLE_BASE.ID) to all AMPs.  Then
          we do a SORT to order Spool 2 by row hash.  The size of Spool
          2 is estimated with low confidence to be 168,480 rows (
          5,054,400 bytes).  The estimated time for this step is 0.03
          seconds.
       2) We do an all-AMPs RETRIEVE step from
          TEMP_DB_T.TABLE_INC by way of an all-rows scan
          with no residual conditions into Spool 3 (all_amps), which is
          redistributed by the hash code of (
          TEMP_DB_T.TABLE_INC.NUM,
          TEMP_DB_T.TABLE_INC.ID) to all AMPs.  Then
          we do a SORT to order Spool 3 by row hash and the sort key in
          spool field1 eliminating duplicate rows.  The size of Spool 3
          is estimated with high confidence to be 5,640 rows (310,200
          bytes).  The estimated time for this step is 0.03 seconds.
  4) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of an
     all-rows scan, which is joined to Spool 3 (Last Use) by way of an
     all-rows scan.  Spool 2 and Spool 3 are joined using an inclusion
     merge join, with a join condition of ("(ID = ID) AND
     (NUM = NUM)").  The result goes into Spool 1 (all_amps),
     which is redistributed by the hash code of (
     TEMP_DB.TABLE_BASE.ROWID) to all AMPs.  Then we do
     a SORT to order Spool 1 by row hash and the sort key in spool
     field1 eliminating duplicate rows.  The size of Spool 1 is
     estimated with no confidence to be 168,480 rows (3,032,640 bytes).
     The estimated time for this step is 1.32 seconds.
  5) We do an all-AMPs MERGE DELETE to
     TEMP_DB.TABLE_BASE from Spool 1 (Last Use) via the
     row id.  The size is estimated with no confidence to be 168,480
     rows.  The estimated time for this step is 42.95 seconds.
  6) We spoil the parser's dictionary cache for the table.
  7) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> No rows are returned to the user as the result of statement 1.

QUERY 2 EXPLAIN PLAN

 This query is optimized using type 2 profile T2_Linux64, profileid 21.
  1) First, we lock TEMP_DB.TABLE_BASE for write on a reserved RowHash to
     prevent global deadlock.
  2) Next, we lock TEMP_DB_T.TABLE_INC for access, and we
     lock TEMP_DB.TABLE_BASE for write.
  3) We execute the following steps in parallel.
       1) We do an all-AMPs RETRIEVE step from TEMP_DB.TABLE_BASE by way of
          an all-rows scan with no residual conditions into Spool 2
          (all_amps), which is redistributed by the hash code of (
          TEMP_DB.TABLE_BASE.NUM, TEMP_DB.TABLE_BASE.ID) to all AMPs.
          Then we do a SORT to order Spool 2 by row hash.  The size of
          Spool 2 is estimated with low confidence to be 168,480 rows (
          5,054,400 bytes).  The estimated time for this step is 0.03
          seconds.
       2) We do an all-AMPs RETRIEVE step from
          TEMP_DB_T.TABLE_INC by way of an all-rows scan
          with no residual conditions into Spool 3 (all_amps), which is
          redistributed by the hash code of (
          TEMP_DB_T.TABLE_INC.NUM,
          TEMP_DB_T.TABLE_INC.ID) to all AMPs.  Then
          we do a SORT to order Spool 3 by row hash and the sort key in
          spool field1 eliminating duplicate rows.  The size of Spool 3
          is estimated with high confidence to be 5,640 rows (310,200
          bytes).  The estimated time for this step is 0.03 seconds.
  4) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of an
     all-rows scan, which is joined to Spool 3 (Last Use) by way of an
     all-rows scan.  Spool 2 and Spool 3 are joined using an inclusion
     merge join, with a join condition of ("(NUM = NUM) AND
     (ID = ID)").  The result goes into Spool 1 (all_amps), which
     is redistributed by the hash code of (TEMP_DB.TABLE_BASE.ROWID) to all
     AMPs.  Then we do a SORT to order Spool 1 by row hash and the sort
     key in spool field1 eliminating duplicate rows.  The size of Spool
     1 is estimated with no confidence to be 168,480 rows (3,032,640
     bytes).  The estimated time for this step is 1.32 seconds.
  5) We do an all-AMPs MERGE DELETE to TEMP_DB.TABLE_BASE from Spool 1 (Last
     Use) via the row id.  The size is estimated with no confidence to
     be 168,480 rows.  The estimated time for this step is 42.95
     seconds.
  6) We spoil the parser's dictionary cache for the table.
  7) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> No rows are returned to the user as the result of statement 1.

For TABLE_BASE

+----------------+----------+
|  table_bytes   | skewness |
+----------------+----------+
| 16842085888.00 |    22.78 |
+----------------+----------+

For TABLE_INC

+-------------+----------+
| table_bytes | skewness |
+-------------+----------+
|  5317120.00 |    44.52 |
+-------------+----------+
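
These size/skew figures look like the usual per-table aggregation over dbc.TableSizeV; a minimal sketch of that query, assuming the common 100 * (1 - AVG/MAX) skew formula over CurrentPerm (the database/table names are just placeholders):

SELECT DatabaseName,
       TableName,
       SUM(CurrentPerm) AS table_bytes,
       100 * (1 - AVG(CAST(CurrentPerm AS FLOAT)) / NULLIF(MAX(CurrentPerm), 0)) AS skewness
FROM   dbc.TableSizeV
WHERE  DatabaseName = 'TEMP_DB'      -- placeholder database
AND    TableName    = 'TABLE_BASE'   -- placeholder table
GROUP  BY 1, 2;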

What's the relation between TABLE_BASE and TABLE_INC?

If it's one-to-many, the original query probably creates a huge spool first, while the two rewrites might apply DISTINCT before the join (see the sketch below).
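
For illustration, that duplicate elimination can also be written out explicitly; this is only a sketch, since per step 3.2 of both plans the optimizer already adds the "eliminating duplicate rows" sort for the rewrites:

DELETE FROM TABLE_BASE WHERE (ID, NUM) IN
(SELECT DISTINCT ID, NUM FROM TABLE_INC);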

Regarding IN vs. EXISTS there should be hardly any difference; did you check dbc.QryLogStepsV? (A sample query is sketched below.)
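
A minimal sketch of such a step-level comparison, assuming DBQL step logging is enabled; the QueryIDs are placeholders for the two deletes, and the column names should be verified against dbc.QryLogStepsV on your release:

SELECT QueryID,
       StepLev1Num,
       StepName,
       CPUTime,
       IOCount,
       RowCount
FROM   dbc.QryLogStepsV
WHERE  QueryID IN (123456789, 123456790)  -- placeholder QueryIDs of Query 1 and Query 2
ORDER  BY QueryID, StepLev1Num;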

Edit:

If (ID, NUM) is the PI of the target table, rewriting to a MERGE DELETE should provide the best performance:

MERGE INTO TABLE_BASE AS tgt
USING TABLE_INC AS src
   ON src.ID = tgt.ID
  AND src.NUM = tgt.NUM
WHEN MATCHED
THEN DELETE;
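
(The PI precondition matters because, as far as I know, Teradata's MERGE requires the ON clause to cover the target table's primary index; matching rows then land on the same AMP, and the delete can skip the ROWID redistribution/sort step seen in the plans above.)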
