
SqlBulkCopy *VERY* slow on Azure SQL and C#

I am inserting records into SQL Server using SqlBulkCopy and FastMember.

Locally I can insert 100k records in about 2 seconds. When I run this in an Azure Web App webjob against an Azure SQL Database, it takes more than 10 minutes and the transaction times out. The table definition is the same, the table holds a similar amount of data, etc. There are no locks; it's just slow. When I run it locally and write to the Azure SQL database, it also takes more than 10 minutes.

The actual call is as simple as it could be:

using (var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "Table" })
using (var reader = ObjectReader.Create(entities, columnList))
{
    await bulkCopy.WriteToServerAsync(reader).ConfigureAwait(false);
}

I've tried removing the transaction using TransactionScope(TransactionScopeOption.Suppress), but it made no difference.
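For reference, suppressing the ambient transaction around the bulk copy looked roughly like this (a sketch; the surrounding ambient scope and the connection/entities/columnList variables are assumed from the snippet above):

```csharp
using System.Transactions;

// Sketch: run the bulk copy outside whatever ambient TransactionScope
// the caller created. Variable names are illustrative.
using (var suppress = new TransactionScope(
    TransactionScopeOption.Suppress,
    TransactionScopeAsyncFlowOption.Enabled)) // required for await inside the scope
{
    using (var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "Table" })
    using (var reader = ObjectReader.Create(entities, columnList))
    {
        await bulkCopy.WriteToServerAsync(reader).ConfigureAwait(false);
    }
    suppress.Complete();
}
```

Note that without TransactionScopeAsyncFlowOption.Enabled, awaiting inside a TransactionScope throws, which is easy to miss when wrapping async bulk-copy calls.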

Could anyone help by either letting me know what stupid mistake I've made or giving me some hints on how to diagnose this? It's getting really frustrating! The difference in time is so big that I'm sure I've missed something fundamental here.

You can run the query below to verify whether a high log-write percentage exists while that workload is running:

SELECT 
    (COUNT(end_time) - SUM(CASE WHEN avg_cpu_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'CPU Fit Percent'
    ,(COUNT(end_time) - SUM(CASE WHEN avg_log_write_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Log Write Fit Percent'
    ,(COUNT(end_time) - SUM(CASE WHEN avg_data_io_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Physical Data Read Fit Percent'
FROM sys.dm_db_resource_stats

You can also collect the waits associated with that workload using one of the methods explained in this article. You will probably see high IO-related waits during execution of that workload.

To avoid throttling during execution of IO-intensive workloads, you can scale up before running them and scale back down to the initial tier after the workload finishes. For example:

ALTER DATABASE [db1] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P15');

Well. I removed all the indexes, and it made some difference, but the batch was still timing out at 10 minutes. Then I removed the outer ambient transaction scope entirely (rather than using TransactionScope.Suppress), and all of a sudden the times look "normal" again. It's taking about 50 seconds to insert, and it gets close to maxing out the DTUs while running, whereas before it only reached about 20%.
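A related knob worth knowing about (my suggestion, not something tried in this thread): SqlBulkCopy can commit in smaller batches using its own internal transactions, which keeps any single transaction, and the log volume it generates, small. A sketch, assuming the same entities and columnList as above and a hypothetical connectionString:

```csharp
// Sketch: BatchSize commits every N rows, and UseInternalTransaction
// wraps each batch in its own transaction instead of one giant one.
// This option cannot be combined with an external/ambient transaction.
using (var bulkCopy = new SqlBulkCopy(
    connectionString,
    SqlBulkCopyOptions.UseInternalTransaction)
{
    DestinationTableName = "Table",
    BatchSize = 10_000,
    BulkCopyTimeout = 0 // 0 = no timeout; the default is 30 seconds
})
using (var reader = ObjectReader.Create(entities, columnList))
{
    await bulkCopy.WriteToServerAsync(reader).ConfigureAwait(false);
}
```

The trade-off is that a failure mid-way leaves earlier batches committed, so this only fits loads that are idempotent or can be cleaned up on retry.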

I still have no idea why it runs locally in 2 seconds with the ambient transaction, but that will have to be one to chalk up to experience.

Thanks to all who answered - at the least you pointed me in a good direction to learn!
