
200,000 rows of info with SQLBulkCopy c#

I have a database table with 3 columns, all strings. I must upload 200,000 rows of information. I can do this, but as you would expect it is taking much too long, like 30+ minutes.

I am now trying to use SQLBulkCopy to do it faster, but I cannot understand how to do it correctly. A sample of a row of data would be:

"test string","test string,"test string"

Should I write my data to a temp file so SQLBulkCopy can use it to upload the data? Like have each line represent a row, and delimit the data with commas?

Here is what I have so far, any help would be great!

//this method gets a list of the data objects passed in

// holds all the rows to be bulk copied to the data table in the database
List<string[]> listOfRows = new List<string[]>();

foreach (DataUnit dataUnit in dataUnitList)
{
    string[] row = new string[3]; // three columns, so the array must hold three values (new string[2] was too small)
    row[0] = dataUnit.value1.ToString();
    row[1] = dataUnit.value2.ToString();
    row[2] = dataUnit.value3.ToString();
    listOfRows.Add(row);
}

string path = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "Test200kFile.txt");

// Note: no File.Create(path) call here — it returned an open FileStream that kept
// the file locked, so the StreamWriter below threw. StreamWriter creates the file itself.
using (StreamWriter file = new StreamWriter(path))
{
    foreach (string[] array in listOfRows)
    {
        // WriteLine, not Write, so each row ends up on its own line
        file.WriteLine(array[0] + "," + array[1] + "," + array[2]);
    }
}

using (MyFileDataReader reader = new MyFileDataReader(path)) // custom reader over the temp file
using (SqlBulkCopy bulkCopy = new SqlBulkCopy("my connection string to database"))
{
    bulkCopy.DestinationTableName = "my table name";
    bulkCopy.BatchSize = 20000; // what if it's less than 200,000?

    bulkCopy.WriteToServer(reader);
}

The BULK INSERT T-SQL statement requires that the source data be present in a file on the DB server. That's not the way you want to go if you are writing code.

Instead, you should create an adapter around whatever data source you have. The adapter should implement parts of the IDataReader interface - SqlBulkCopy only makes use of a few of the methods of IDataReader, so only those need to be implemented:

Properties

  • FieldCount

Methods

  • GetName
  • GetOrdinal
  • GetValue
  • Read

(source: http://www.michaelbowersox.com/2011/12/22/using-a-custom-idatareader-to-stream-data-into-a-database/ )

So unless your data already is in a file, there is no need to write it to a file. Just make sure that you can present one record at a time through the IDataReader interface.
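For concreteness, here is a minimal sketch of such an adapter, wrapping the in-memory List<string[]> already built in the question (the class name, column names, and the nvarchar type name are assumptions for illustration, not from the original post). Only the five members listed above do real work; the rest of the interface is stubbed, which per the linked post is safe because SqlBulkCopy does not call them:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Presents an in-memory list of string rows to SqlBulkCopy one record at a time.
class StringListDataReader : IDataReader
{
    private readonly IEnumerator<string[]> _rows;
    private readonly string[] _columns;
    private string[] _current;

    public StringListDataReader(IEnumerable<string[]> rows, string[] columns)
    {
        _rows = rows.GetEnumerator();
        _columns = columns;
    }

    // --- the members SqlBulkCopy actually uses ---
    public int FieldCount { get { return _columns.Length; } }
    public string GetName(int i) { return _columns[i]; }
    public int GetOrdinal(string name) { return Array.IndexOf(_columns, name); }
    public object GetValue(int i) { return _current[i]; }
    public bool Read()
    {
        if (!_rows.MoveNext()) return false;
        _current = _rows.Current;
        return true;
    }

    // --- required by the interface, but not needed for this scenario ---
    public void Dispose() { _rows.Dispose(); }
    public void Close() { }
    public bool IsClosed { get { return false; } }
    public int Depth { get { return 0; } }
    public int RecordsAffected { get { return -1; } }
    public bool NextResult() { return false; }
    public DataTable GetSchemaTable() { throw new NotSupportedException(); }
    public object this[int i] { get { return GetValue(i); } }
    public object this[string name] { get { return GetValue(GetOrdinal(name)); } }
    public bool IsDBNull(int i) { return _current[i] == null; }
    public string GetString(int i) { return _current[i]; }
    public Type GetFieldType(int i) { return typeof(string); }
    public int GetValues(object[] values) { _current.CopyTo(values, 0); return _current.Length; }
    public string GetDataTypeName(int i) { return "nvarchar"; } // assumed column type
    public bool GetBoolean(int i) { throw new NotSupportedException(); }
    public byte GetByte(int i) { throw new NotSupportedException(); }
    public long GetBytes(int i, long o, byte[] b, int bo, int l) { throw new NotSupportedException(); }
    public char GetChar(int i) { throw new NotSupportedException(); }
    public long GetChars(int i, long o, char[] b, int bo, int l) { throw new NotSupportedException(); }
    public IDataReader GetData(int i) { throw new NotSupportedException(); }
    public DateTime GetDateTime(int i) { throw new NotSupportedException(); }
    public decimal GetDecimal(int i) { throw new NotSupportedException(); }
    public double GetDouble(int i) { throw new NotSupportedException(); }
    public float GetFloat(int i) { throw new NotSupportedException(); }
    public Guid GetGuid(int i) { throw new NotSupportedException(); }
    public short GetInt16(int i) { throw new NotSupportedException(); }
    public int GetInt32(int i) { throw new NotSupportedException(); }
    public long GetInt64(int i) { throw new NotSupportedException(); }
}

With that in place, the temp file from the question disappears entirely:

string[] columns = { "Column1", "Column2", "Column3" }; // assumed column names
using (IDataReader reader = new StringListDataReader(listOfRows, columns))
using (SqlBulkCopy bulkCopy = new SqlBulkCopy("my connection string to database"))
{
    bulkCopy.DestinationTableName = "my table name";
    bulkCopy.WriteToServer(reader);
}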

Avoiding saving the data to a file saves a lot of resources and time. I once implemented an import job that read XML data from an FTP data source through an XmlReader, which was then wrapped in a custom IDataReader. That way I could stream the data from the FTP server, through the app server, to the DB server without having to write the entire data set to disk on the app server.
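A compressed sketch of that streaming setup (the element and attribute names are invented here; the remaining IDataReader members would be implemented as in the sketch above): Read() advances the XmlReader one record at a time, so rows flow off the network stream on demand and the full data set never lands on disk.

using System.IO;
using System.Xml;

// Streams rows out of XML on demand instead of materializing them first.
class XmlStreamDataReader // : IDataReader, stubs as in the previous sketch
{
    private readonly XmlReader _xml;
    private readonly string[] _columns = { "value1", "value2", "value3" }; // assumed attribute names
    private readonly string[] _current = new string[3];

    public XmlStreamDataReader(Stream source)
    {
        _xml = XmlReader.Create(source); // e.g. the response stream of an FTP request
    }

    public bool Read()
    {
        if (!_xml.ReadToFollowing("row")) return false; // assumed element name
        for (int i = 0; i < _columns.Length; i++)
            _current[i] = _xml.GetAttribute(_columns[i]);
        return true;
    }

    public object GetValue(int i) { return _current[i]; }
}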
