
C#: update a SQL Server database table using SQL with values from a DataTable that is not in SQL Server

I have a SQL Server database table that I need to update with values that I have in a C# application DataTable retrieved from an outside source (a file, an external API call, etc.; the exact source is probably irrelevant here).

Is there a way to do a "mass" update using SQL similar to:

UPDATE t1
SET    t1.custAddress = t2.address
FROM   myTable1 t1
       JOIN myTable2 t2 ON t1.id = t2.id

I suppose I could use the above example as long as both tables are in the same SQL Server database; but in this case, myTable2 is an application (C#) DataTable and not a SQL Server table.

Is there a way to accomplish the update similar to the one above?

I know I could "loop" through each of the application table rows and issue a SQL UPDATE for each row, but if my DataTable has thousands of rows, I would rather not (whenever possible) execute one SQL UPDATE per row that needs updating.

Any help would be appreciated. I am open to solutions or suggestions that involve LINQ or any other C# or SQL logic.

  1. Take the DataTable (C#) and convert it to XML. Hint: if you add the DataTable to a DataSet, you can call ds.GetXml().

  2. Push the XML to SQL Server.

  3. "Shred" the xml into #temp or @variable table(s) using sql-server-xml functionality.

  4. Perform CUD (create/update/delete) operations using the data in the #temp or @variable tables. (A rough sketch of these four steps is shown right after this list.)
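
Here is a minimal sketch of steps 1-4. The myTable1, custAddress and id names come from the question; everything else (connStr, custTable, the @xmlData parameter, the #incoming temp table, and the "CustUpdates" DataSet name) is made up for illustration:

// Minimal sketch of steps 1-4: DataTable -> XML -> shred -> one set-based UPDATE.
using System.Data;
using Microsoft.Data.SqlClient;   // or System.Data.SqlClient on older frameworks

static void UpdateFromDataTable(DataTable custTable, string connStr)
{
    // Step 1: wrap the DataTable in a DataSet so GetXml() is available.
    var ds = new DataSet("CustUpdates");
    ds.Tables.Add(custTable.Copy());
    string xml = ds.GetXml();

    // Steps 2-4: send the XML as a parameter, shred it into a #temp table
    // with nodes()/value(), then run a single set-based UPDATE.
    const string sql = @"
        DECLARE @x xml = @xmlData;

        SELECT  n.c.value('(id)[1]',      'int')           AS id,
                n.c.value('(address)[1]', 'nvarchar(200)') AS address
        INTO    #incoming
        FROM    @x.nodes('/CustUpdates/*') AS n(c);

        UPDATE  t1
        SET     t1.custAddress = t2.address
        FROM    myTable1 AS t1
                JOIN #incoming AS t2 ON t1.id = t2.id;";

    using var conn = new SqlConnection(connStr);
    using var cmd = new SqlCommand(sql, conn);
    cmd.Parameters.Add("@xmlData", SqlDbType.Xml).Value = xml;
    conn.Open();
    cmd.ExecuteNonQuery();
}

The XPath above assumes the default element-based XML that GetXml() produces (one element per row, one child element per column).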

The basic idea is here:

HOW TO: Perform Bulk Updates and Inserts Using OpenXML with .NET Providers in Visual C# .NET

But you don't want to use OPENXML (SQL Server 2000 syntax); instead, use the examples for "shredding" found here:

http://pratchev.blogspot.com/2007/06/shredding-xml-in-sql-server-2005.html

IMHO: this is a great way to do "set based" operations for CUD (create/update/delete). One of the biggest bonuses of doing it this way is that all the index updating happens after the set-based operation, whereas if you do it RBAR (row by agonizing row), the index(es) have to be updated after every single operation. :(

Note, the #temp/@variable table part of step 3 is optional. You do have to shred, but you could go straight from the XML shredding into the "real" tables (and skip the #temp or @variable table). I usually test a real-world "load" (amount of XML data), and if it doesn't make much difference, I stick with #temp or @variable tables. It's easier to debug later, because you can put in a temporary "select * from #myTempTable" and see if the data got shredded correctly. You should also test #temp vs @variable tables; sometimes it makes a difference.

Please note there is an old "performance" bug with element-based XML.

https://connect.microsoft.com/SQLServer/feedback/details/250407/insert-from-nodes-with-element-based-xml-has-poor-performance-on-sp2-with-x64

It may no longer apply, but it is something to be aware of. "Back in the day" (because of that bug) my team and I had to change our element-based XML into attribute-based XML (in C#) before sending it down to SQL Server.
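
If you need attribute-based XML, a DataTable can be switched over before calling GetXml() by setting each column's ColumnMapping; a minimal sketch (table and column names are hypothetical):

// Minimal sketch: emit attribute-based XML from a DataTable by mapping each
// column to an attribute before calling GetXml(). Names are hypothetical.
using System.Data;

static string ToAttributeBasedXml(DataTable table)
{
    var copy = table.Copy();
    foreach (DataColumn col in copy.Columns)
    {
        // Rows become <Row id="1" address="..." /> instead of nested elements.
        col.ColumnMapping = MappingType.Attribute;
    }

    var ds = new DataSet("CustUpdates");
    ds.Tables.Add(copy);
    return ds.GetXml();
}

On the SQL side the shredding XPath then changes accordingly (e.g. '@id' instead of '(id)[1]').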

I think the most straightforward way to accomplish this is to bulk insert your application data into a staging table using SqlBulkCopy. There is also an option to set the batch size if you want to do this in batches.

After the bulk insert, run a stored procedure to do the update. Another option is to use an after insert trigger, but that seems to only make sense if you bulk insert in batches.
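
A minimal sketch of that approach, assuming a staging table dbo.myTable1_Staging with matching id/address columns (the staging table name and connection string are made up; BatchSize is the batch-size option mentioned above):

// Minimal sketch: bulk-copy the DataTable into a staging table, then run one
// set-based UPDATE. "myTable1_Staging" and the column names are assumptions.
using System.Data;
using Microsoft.Data.SqlClient;   // or System.Data.SqlClient

static void BulkUpdate(DataTable custTable, string connStr)
{
    using var conn = new SqlConnection(connStr);
    conn.Open();

    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.myTable1_Staging";
        bulk.BatchSize = 5000;                    // optional: insert in batches
        bulk.ColumnMappings.Add("id", "id");
        bulk.ColumnMappings.Add("address", "address");
        bulk.WriteToServer(custTable);
    }

    // One set-based UPDATE from the staging table (this part could also live
    // in a stored procedure).
    const string sql = @"
        UPDATE t1
        SET    t1.custAddress = t2.address
        FROM   myTable1 AS t1
               JOIN myTable1_Staging AS t2 ON t1.id = t2.id;

        TRUNCATE TABLE myTable1_Staging;   -- clear the staging table for the next run";

    using var cmd = new SqlCommand(sql, conn);
    cmd.ExecuteNonQuery();
}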
