
Copy large Nvarchar(max) table from one db to another

I have a table with more than 60 Nvarchar(max) fields, and I need to copy the table to another database with SSIS.

I disabled all the indexes, which helped a bit, but it still takes a long time to copy the whole table (more than 1M rows): about an hour at the moment.

Does anyone have an idea how to make it run faster?

In my experience, the fastest way to load large amounts of data is to use bulk insert. This is usually done from a flat file, so you have to export the data to a file first before performing the bulk insert.
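As a sketch of the flat-file approach in T-SQL: the file path, server name, and table names below are placeholders, and the options shown (native Unicode format, table lock, batching) are common choices rather than the only valid ones.

```sql
-- Sketch only: paths, server, and table names are placeholders.
-- 1) Export from the source server (run from a command prompt):
--    bcp "SELECT * FROM SourceDb.dbo.BigTable" queryout C:\tmp\BigTable.dat -N -S srcserver -T
-- 2) Load into the destination table on the target server:
BULK INSERT dbo.BigTable
FROM 'C:\tmp\BigTable.dat'
WITH (
    DATAFILETYPE = 'widenative',  -- matches bcp -N (Unicode native), preserves Nvarchar data
    TABLOCK,                      -- table lock, which can enable minimal logging
    BATCHSIZE = 100000            -- commit in chunks rather than one giant transaction
);
```

With the destination indexes disabled (as you already did) and a table lock taken, the load can be minimally logged, which is usually where the big speedup comes from.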

If you don't like the flat-file approach, the alternative is to write a CLR routine. You can use a SqlDataReader to read from the source table, store the rows in a DataTable, and then use SqlBulkCopy to do the bulk insert. The advantage of this approach is that you can use a buffering scheme to limit the memory required. The SqlDataReader is memory-efficient: it reads rows as required and then drops them from memory; the DataTable is not. So you set yourself a limit, say 10,000 rows, and whenever the DataTable hits this limit you perform the SqlBulkCopy, delete the existing rows from the DataTable, and carry on with the next batch.
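The buffering scheme described above might look like the following C# sketch. Connection strings, the table name `dbo.BigTable`, and the 10,000-row batch size are all placeholders for your environment.

```csharp
// Sketch, assuming both servers are reachable from the machine running this code.
using System.Data;
using System.Data.SqlClient;

class TableCopier
{
    static void CopyTable(string sourceConnStr, string destConnStr)
    {
        using (var src = new SqlConnection(sourceConnStr))
        using (var dst = new SqlConnection(destConnStr))
        {
            src.Open();
            dst.Open();

            var cmd = new SqlCommand("SELECT * FROM dbo.BigTable", src);
            using (SqlDataReader reader = cmd.ExecuteReader())
            using (var bulk = new SqlBulkCopy(dst))
            {
                bulk.DestinationTableName = "dbo.BigTable";
                bulk.BulkCopyTimeout = 0;  // no timeout for a long-running copy

                // Build an empty DataTable matching the source schema.
                var buffer = new DataTable();
                for (int i = 0; i < reader.FieldCount; i++)
                    buffer.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

                const int batchSize = 10000;  // memory cap: rows buffered per flush
                var values = new object[reader.FieldCount];
                while (reader.Read())
                {
                    reader.GetValues(values);
                    buffer.Rows.Add(values);
                    if (buffer.Rows.Count >= batchSize)
                    {
                        bulk.WriteToServer(buffer);
                        buffer.Clear();  // drop flushed rows to keep memory bounded
                    }
                }
                if (buffer.Rows.Count > 0)
                    bulk.WriteToServer(buffer);  // final partial batch
            }
        }
    }
}
```

Note that on .NET 4.5+ you can skip the DataTable entirely: `SqlBulkCopy.WriteToServer(IDataReader)` with `EnableStreaming = true` streams rows straight from the reader, which is simpler and avoids buffering the Nvarchar(max) values in memory at all.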

Disclaimer: the technical posts on this site follow the CC BY-SA 4.0 license; if you need to repost, please credit this site or the original source. For any questions contact: yoyou2525@163.com.
