
Deleting 500 records from a table with 1 million records shouldn't take this long

I hope someone can help me. I have a simple SQL statement:

delete from sometable 
where tableidcolumn in (...)

I have 500 records I want to delete and recreate. The table recently grew to over 1 million records. The problem is that the statement above takes over 5 minutes without completing. I have a primary key and 2 non-clustered, non-unique indexes. My delete statement is using the primary key.

Can anyone help me understand why this statement is taking so long and how I can speed it up?

There are two areas I would look at first: locking and a bad plan.

Locking - run your query and, while it is running, see if it is being blocked by anything else: "select * from sys.dm_exec_requests where blocking_session_id <> 0". If you see anything blocking your request, then I would start by looking at:

https://www.simple-talk.com/sql/database-administration/the-dba-as-detective-troubleshooting-locking-and-blocking/
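
For reference, a minimal sketch of that blocking check, joined to the text of the blocked statement (sys.dm_exec_requests and sys.dm_exec_sql_text are standard SQL Server DMVs; the column list here is just one reasonable selection):

-- Sketch: list requests that are currently blocked, and who is blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command,
       t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;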

If there is no locking, then get the execution plan for the delete - what is it doing? Is its cost exceptionally high?
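
If you can rerun the statement in SSMS, one way to see what it is actually doing is to turn on the standard STATISTICS options (or use "Include Actual Execution Plan" / SET SHOWPLAN_XML ON for the plan itself); this is a sketch, not a prescription:

-- Sketch: show I/O and timing for the statement, then inspect the plan in SSMS.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

delete from sometable
where tableidcolumn in (...)   -- the original statement from the question

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;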

Other than that, how long do you expect it to take? Is it a little longer than that, or a lot longer? Did it only get this slow after the table grew significantly, or has it been getting slower over a long period of time?

What is the I/O performance like - what are your average read/write times, and so on?
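
If you want numbers rather than a feel for it, a rough per-file latency check could look like the sketch below, assuming SQL Server (sys.dm_io_virtual_file_stats is the documented DMV; the plain averages here are only a rough indicator since the counters accumulate from the last restart):

-- Sketch: average read/write latency (ms) per database file since the last restart.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;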

TL;DR: Don't do that (instead of a big 'in' clause, preload a temporary table and use that).

With that number of parameters, an unknown backend configuration (even though it should be fine by today's standards), and no way to guess what your in-memory size may be during processing, you may be hitting (in that order) a stack, batch, or memory size limit - start with this answer. It is also possible to hit an instruction size limit.

The troubleshooting comments may lead you to another answer. My focus is on the 'in' clause and statement size, and all of these links include advice to preload a temporary table and use it in your query, as in the sketch below.
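
A minimal sketch of that temp-table approach (table and column names mirror the question; #ids_to_delete and the INT key type are assumptions for illustration):

-- Sketch: load the 500 ids into a temp table once, then delete via a join
-- instead of a 500-element IN (...) list.
CREATE TABLE #ids_to_delete (tableidcolumn INT PRIMARY KEY);  -- match the real key type

INSERT INTO #ids_to_delete (tableidcolumn)
VALUES (1), (2), (3);   -- ...the ids to delete, or bulk-load them

DELETE s
FROM sometable AS s
INNER JOIN #ids_to_delete AS d
        ON d.tableidcolumn = s.tableidcolumn;

DROP TABLE #ids_to_delete;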
