
Inserting 40–50 records into Azure Table Storage sometimes takes more than 30 seconds, throwing a timeout exception

I have a long-running application whose task is to insert data every 2–3 seconds. Most of the time it works fine, but sometimes I get a timeout exception. Each time it happens, it is inserting around 50 records. I also tested with a heavier load of more than 2,000 rows, and it works perfectly. Only a few times a day does it throw a timeout exception.

Source: Microsoft.WindowsAzure.Storage
TargetSite: T EndExecuteAsync[T]
StackTrace:
   at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
   at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass2`1.b__0(IAsyncResult ar)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at smi.Server.Shared.VehicleHistoryLibrary.ATVehicleHistoryContext.d__4.MoveNext()

Here is my code

ThreadPool.SetMinThreads(1024, 256);
ServicePointManager.DefaultConnectionLimit = 256;
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;
client.DefaultRequestOptions = new TableRequestOptions
{
    MaximumExecutionTime = TimeSpan.FromSeconds(30), // Time out requests after 30 seconds
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(3), 4),
    LocationMode = LocationMode.PrimaryThenSecondary
};

var tableEntityGroups = histories.Select(h => new TrackHistoryTableEntity(h))
                                 .GroupBy(e => e.PartitionKey)
                                 .ToDictionary(g => g.Key, g => g.ToList());
List<Task> tasks = new List<Task>();
foreach (var kvp in tableEntityGroups)
{
    // Merge track history records with the same FixTaken second into one, taking the average
    var mergedHistories = kvp.Value.GroupBy(v => v.RowKey).Select(g => new TrackHistoryTableEntity()
    {
        PartitionKey = g.First().PartitionKey,
        RowKey = g.First().RowKey,
        A = g.Select(v => v.A).Average(),
        N = g.Select(v => v.N).Average(),
        V = g.Select(v => v.V).Average(),
        B = g.Select(v => v.B).Average(),
        D = g.Select(v => v.D).Sum()
    });

    TableBatchOperation batchOperation = new TableBatchOperation();
    foreach (var v in mergedHistories)
    {
        batchOperation.Add(TableOperation.InsertOrReplace(v));
        if (batchOperation.Count >= 100) // A table batch can hold at most 100 operations
        {
            tasks.Add(TrackHistoryTable.ExecuteBatchAsync(batchOperation));
            batchOperation = new TableBatchOperation();
        }
    }
    if (batchOperation.Count > 0)
    {
        tasks.Add(TrackHistoryTable.ExecuteBatchAsync(batchOperation));
    }

    var splitKey = kvp.Value[0].PartitionKey.Split('_');
    tasks.Add(TrackHistoryTracksTable.ExecuteAsync(TableOperation.InsertOrReplace(new TableEntity(splitKey[0], Int32.Parse(splitKey[1]).ToString()))));

    if (trackPartitionUpdates)
        tasks.Add(TrackHistoryPartitionUpdatesTable.ExecuteAsync(TableOperation.InsertOrReplace(new TableEntity(TrackHistoryTableEntity.GetHourTimestamp(DateTime.UtcNow).ToString(), kvp.Value[0].PartitionKey))));
}
await Task.WhenAll(tasks.ToArray());

Here are a couple of considerations:

  1. [CAUTION] The maximum processing time SLA for batch table operations is 30 seconds, as opposed to 2 seconds for single-entity operations. More details are available at https://azure.microsoft.com/en-us/support/legal/sla/storage/v1_5/ .
  2. [BEST PRACTICE] Implement a retry policy (preferably an exponential retry for batching use cases, taking the SLA above into account). More details at https://docs.microsoft.com/en-us/azure/architecture/best-practices/retry-service-specific#azure-storage .
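Given those SLAs, one option is to give batch calls their own request options instead of relying on the client-wide `MaximumExecutionTime` of 30 seconds, which must cover the server-side processing time *plus* all retries. A minimal sketch against the question's Microsoft.WindowsAzure.Storage code; the specific timeout values below are illustrative assumptions, not recommendations:

    // Sketch: per-call options for batch operations. Since a single batch may
    // legitimately take up to 30 seconds server-side, the client-side
    // MaximumExecutionTime (which spans all retry attempts) should be larger.
    var batchOptions = new TableRequestOptions
    {
        ServerTimeout = TimeSpan.FromSeconds(30),        // per-attempt server timeout
        MaximumExecutionTime = TimeSpan.FromSeconds(90), // total budget incl. retries (assumed value)
        RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(3), 4)
    };

    // ExecuteBatchAsync has an overload accepting TableRequestOptions and an
    // OperationContext, so these options apply only to batch calls:
    tasks.Add(TrackHistoryTable.ExecuteBatchAsync(batchOperation, batchOptions, operationContext: null));

This keeps the tighter 30-second budget for the single-entity `ExecuteAsync` calls while giving batches enough headroom to retry without tripping the overall execution-time limit.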

Hope that helps!
