I have a long-running application whose task is to insert data every 2 to 3 seconds. Most of the time it works fine, but occasionally I get a timeout exception. Each insert is only around 50 records or so; I have tested under much heavier load (more than 2,000 rows) and it works perfectly. Only a few times a day does it throw the timeout exception.
Source: Microsoft.WindowsAzure.Storage
TargetSite: T EndExecuteAsync[T](IAsyncResult)
StackTrace:
   at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
   at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass2`1.b__0(IAsyncResult ar)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at smi.Server.Shared.VehicleHistoryLibrary.ATVehicleHistoryContext.d__4.MoveNext()
Here is my code:
ThreadPool.SetMinThreads(1024, 256);
ServicePointManager.DefaultConnectionLimit = 256;
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;
client.DefaultRequestOptions = new TableRequestOptions
{
MaximumExecutionTime = TimeSpan.FromSeconds(30), //Timeout requests after 30 seconds
RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(3), 4),
LocationMode = LocationMode.PrimaryThenSecondary
};
var tableEntityGroups = histories.Select(h => new TrackHistoryTableEntity(h)).GroupBy(e => e.PartitionKey).ToDictionary(g => g.Key, g => g.ToList());
List<Task> tasks = new List<Task>();
foreach (var kvp in tableEntityGroups)
{
//Merge track history records that share the same FixTaken second into one, averaging A, N, V and B and summing D
var mergedHistories = kvp.Value.GroupBy(v => v.RowKey).Select(g => new TrackHistoryTableEntity()
{
PartitionKey = g.First().PartitionKey,
RowKey = g.First().RowKey,
A = g.Select(v => v.A).Average(),
N = g.Select(v => v.N).Average(),
V = g.Select(v => v.V).Average(),
B = g.Select(v => v.B).Average(),
D = g.Select(v => v.D).Sum()
});
TableBatchOperation batchOperation = new TableBatchOperation();
foreach (var v in mergedHistories)
{
batchOperation.Add(TableOperation.InsertOrReplace(v));
if (batchOperation.Count >= 100)
{
tasks.Add(TrackHistoryTable.ExecuteBatchAsync(batchOperation));
batchOperation = new TableBatchOperation();
}
}
if (batchOperation.Count > 0)
{
tasks.Add(TrackHistoryTable.ExecuteBatchAsync(batchOperation));
}
var splitKey = kvp.Value[0].PartitionKey.Split('_');
tasks.Add(TrackHistoryTracksTable.ExecuteAsync(TableOperation.InsertOrReplace(new TableEntity(splitKey[0], Int32.Parse(splitKey[1]).ToString()))));
if (trackPartitionUpdates)
tasks.Add(TrackHistoryPartitionUpdatesTable.ExecuteAsync(TableOperation.InsertOrReplace(new TableEntity(TrackHistoryTableEntity.GetHourTimestamp(DateTime.UtcNow).ToString(), kvp.Value[0].PartitionKey))));
}
await Task.WhenAll(tasks.ToArray());
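To narrow down which batches fail and whether the time is spent server-side, one option is to pass an OperationContext into each batch call and log the server request ID and elapsed time when a StorageException surfaces; the request ID can then be correlated with storage analytics or a support case. This is only a sketch against the same Microsoft.WindowsAzure.Storage SDK used above; the helper name and logging sink are illustrative, not part of the original code:

```csharp
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

static class BatchDiagnostics
{
    // Wraps ExecuteBatchAsync so a timeout carries enough context
    // (server request ID + elapsed time) to investigate afterwards.
    public static async Task ExecuteBatchWithDiagnosticsAsync(
        CloudTable table, TableBatchOperation batch)
    {
        var context = new OperationContext();
        var stopwatch = Stopwatch.StartNew();
        try
        {
            await table.ExecuteBatchAsync(batch, requestOptions: null, operationContext: context);
        }
        catch (StorageException)
        {
            stopwatch.Stop();
            // LastResult describes the final attempt (after any retries);
            // ServiceRequestID lets the failure be matched to server-side logs.
            Trace.TraceError(
                "Batch of {0} failed after {1} ms. HttpStatus={2}, ServerRequestId={3}",
                batch.Count,
                stopwatch.ElapsedMilliseconds,
                context.LastResult?.HttpStatusCode,
                context.LastResult?.ServiceRequestID);
            throw;
        }
    }
}
```

The calls to TrackHistoryTable.ExecuteBatchAsync above could then be routed through this wrapper without changing the batching logic.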
Here are a couple of considerations:
Hope that helps!