
Paging a bulk data insert into a table

Maybe this is a dumb question, but how can I split/page the insertion so that other operations can update the same table?

I have two stored procedures; one inserts bulk data.

Stored procedure InsertIntoMyTable:

INSERT INTO MyTable (Column1, Column2, Column3)
SELECT Column1, @Column2, 0
FROM MyOtherTable

The primary key of MyTable is (Column1, Column2).

There is also a MERGE operation against the same table MyTable, but from another source; it mostly updates Column3, but it can also insert data into MyTable.
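
The MERGE itself is not shown here; it presumably looks roughly like the following sketch, where OtherSource and its columns are hypothetical stand-ins for the second data source:

MERGE MyTable AS target
USING OtherSource AS source          -- hypothetical stand-in for the second source
   ON target.Column1 = source.Column1
  AND target.Column2 = source.Column2
WHEN MATCHED THEN
   UPDATE SET target.Column3 = source.Column3
WHEN NOT MATCHED THEN
   INSERT (Column1, Column2, Column3)
   VALUES (source.Column1, source.Column2, source.Column3);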

The problem is that when the insert into MyTable takes a long time (10 million records), the stored procedure that executes the MERGE must wait until InsertIntoMyTable finishes.

While trying to solve this, I added pagination:

DECLARE @Start INT = 1
DECLARE @End INT = 1000
DECLARE @Amount INT = 1000
DECLARE @Total INT

SELECT @Total = COUNT(Column1) FROM MyOtherTable WHERE Column2 = @Column2

WHILE (@Start <= @Total)
BEGIN
   INSERT INTO MyTable (Column1, Column2, Column3)
      SELECT Column1, @Column2, 0
      FROM (SELECT
               Column1,
               ROW_NUMBER() OVER (ORDER BY Column1) rownumber
            FROM MyOtherTable
            WHERE Column2 = @Column2) x
      WHERE x.rownumber BETWEEN @Start AND @End

   SET @Start = @End + 1
   SET @End = @End + @Amount
END

but it still locks the table until the operation ends.

Note: The execution is not in a transaction.

The execution IS in a transaction - if you don't provide an explicit transaction yourself, SQL Server will use an implicit transaction.

And if you have more than 5000 such operations (INSERT, DELETE, UPDATE) in a single transaction, SQL Server will drop the individual row locks, do a lock escalation instead, and exclusively lock the entire table - so no other operations are possible until that (possibly implicit) transaction has committed (or been rolled back).
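
As a rough diagnostic, you can look at sys.dm_tran_locks from a second session while the insert runs; an escalated lock shows up as a lock of resource_type OBJECT with request_mode X:

-- Run from a second session while the insert is executing
SELECT request_session_id,
       resource_type,
       request_mode,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY request_session_id, resource_type, request_mode
ORDER BY lock_count DESC;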

The part that inserts in blocks of 1000 rows shouldn't cause a lock escalation - but of course, any rows that are being inserted in the context of that transaction cannot be read or manipulated by another transaction at the same time.
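
If you want each batch's locks released as soon as that batch is done, one option is to wrap every iteration of the loop in its own explicit transaction - a minimal sketch using the same variables as the loop above:

WHILE (@Start <= @Total)
BEGIN
   BEGIN TRANSACTION

   INSERT INTO MyTable (Column1, Column2, Column3)
      SELECT Column1, @Column2, 0
      FROM (SELECT
               Column1,
               ROW_NUMBER() OVER (ORDER BY Column1) rownumber
            FROM MyOtherTable
            WHERE Column2 = @Column2) x
      WHERE x.rownumber BETWEEN @Start AND @End

   COMMIT TRANSACTION   -- locks taken by this batch are released here

   SET @Start = @End + 1
   SET @End = @End + @Amount
END

Note that if this loop is ever called inside an outer transaction, the inner commits will not release the locks until the outermost transaction commits, so keep the batching loop outside any surrounding transaction.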
