Inserting 40/50 records into Azure Table Storage sometimes takes more than 30 seconds, hence throwing a timeout exception

I have a long-running application whose task is to insert data every 2/3 seconds. Most of the time it works fine, but sometimes I get a timeout exception. I checked: each time it is inserting around 50 records. I also tested with a much higher load, more than 2000 rows, and it works perfectly. Only a few times a day does it throw a timeout exception.

Source: Microsoft.WindowsAzure.Storage
TargetSite: T EndExecuteAsyncT
StackTrace:
   at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
   at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass2`1.b__0(IAsyncResult ar)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at smi.Server.Shared.VehicleHistoryLibrary.ATVehicleHistoryContext.d__4.MoveNext()

Here is my code:

ThreadPool.SetMinThreads(1024, 256);
ServicePointManager.DefaultConnectionLimit = 256;
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;

client.DefaultRequestOptions = new TableRequestOptions
{
    MaximumExecutionTime = TimeSpan.FromSeconds(30), // Timeout requests after 30 seconds
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(3), 4),
    LocationMode = LocationMode.PrimaryThenSecondary
};

var tableEntityGroups = histories.Select(h => new TrackHistoryTableEntity(h)).GroupBy(e => e.PartitionKey).ToDictionary(g => g.Key, g => g.ToList());
List<Task> tasks = new List<Task>();
foreach (var kvp in tableEntityGroups)
{
    // Merge track history records with the same FixTaken second into one, taking the average
    var mergedHistories = kvp.Value.GroupBy(v => v.RowKey).Select(g => new TrackHistoryTableEntity()
    {
        PartitionKey = g.First().PartitionKey,
        RowKey = g.First().RowKey,
        A = g.Select(v => v.A).Average(),
        N = g.Select(v => v.N).Average(),
        V = g.Select(v => v.V).Average(),
        B = g.Select(v => v.B).Average(),
        D = g.Select(v => v.D).Sum()
    });

    TableBatchOperation batchOperation = new TableBatchOperation();
    foreach (var v in mergedHistories)
    {
        batchOperation.Add(TableOperation.InsertOrReplace(v));
        if (batchOperation.Count >= 100)
        {
            tasks.Add(TrackHistoryTable.ExecuteBatchAsync(batchOperation));
            batchOperation = new TableBatchOperation();
        }
    }
    if (batchOperation.Count > 0)
    {
        tasks.Add(TrackHistoryTable.ExecuteBatchAsync(batchOperation));
    }

    var splitKey = kvp.Value[0].PartitionKey.Split('_');
    tasks.Add(TrackHistoryTracksTable.ExecuteAsync(TableOperation.InsertOrReplace(new TableEntity(splitKey[0], Int32.Parse(splitKey[1]).ToString()))));

    if (trackPartitionUpdates)
        tasks.Add(TrackHistoryPartitionUpdatesTable.ExecuteAsync(TableOperation.InsertOrReplace(new TableEntity(TrackHistoryTableEntity.GetHourTimestamp(DateTime.UtcNow).ToString(), kvp.Value[0].PartitionKey))));
}
await Task.WhenAll(tasks.ToArray());

Here are a couple of considerations:

  1. [CAUTION] The maximum processing time SLA for batch table operations is 30 seconds, as opposed to 2 seconds for single-entity operations. More details are available at https://azure.microsoft.com/en-us/support/legal/sla/storage/v1_5/ .
  2. [BEST PRACTICE] Implement a retry policy (preferably exponential retry for batching use cases, and taking your SLA into account); a minimal sketch is shown after this list. More details at https://docs.microsoft.com/en-us/azure/architecture/best-practices/retry-service-specific#azure-storage .
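
Putting the two points together: since the batch calls are the ones covered by the 30-second SLA, one option is to give them their own, more generous request options per call instead of relying on the client-wide client.DefaultRequestOptions, which caps MaximumExecutionTime at exactly 30 seconds. The following is only a rough sketch against the classic Microsoft.WindowsAzure.Storage SDK used in the question; the 90-second maximum execution time and the retry parameters are illustrative assumptions, not values taken from the SLA.

using Microsoft.WindowsAzure.Storage.RetryPolicies;
using Microsoft.WindowsAzure.Storage.Table;

// Per-call options for batch writes only (values below are illustrative assumptions).
var batchRequestOptions = new TableRequestOptions
{
    // Give batches more client-side headroom than the client-wide 30-second cap,
    // since the server-side batch SLA itself is 30 seconds.
    MaximumExecutionTime = TimeSpan.FromSeconds(90),
    ServerTimeout = TimeSpan.FromSeconds(30),
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(4), 5),
    LocationMode = LocationMode.PrimaryThenSecondary
};

// Pass the options explicitly so only the batch calls get the longer timeout;
// single-entity writes keep the tighter defaults.
tasks.Add(TrackHistoryTable.ExecuteBatchAsync(batchOperation, batchRequestOptions, null));

Keeping a separate option set for the batch operations means the single-entity InsertOrReplace calls (e.g. to TrackHistoryTracksTable) still fail fast under the default options, which matches their much tighter 2-second SLA.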

Hope that helps!
