
A pattern for packing incoming parallel requests into one

Suppose we have many randomly incoming threads accessing the same resource in parallel. To access the resource, a thread needs to acquire a lock. If we could pack N incoming threads into one request, resource usage would be N times more efficient. We also need to answer each individual request as fast as possible. What is the best way/pattern to do that in C#?

Currently I have something like this:

//batches lock
var ilock = ModifyBatch.GetTableDeleteBatchLock(table_info.Name);
lock (ilock)
{
    // put the request into requests batch
    if (!ModifyBatch._delete_batch.ContainsKey(table_info.Name))
    {
        ModifyBatch._delete_batch[table_info.Name] = new DeleteData() { Callbacks = new List<Action<string>>(), ids = ids };
    }
    else
    {
        ModifyBatch._delete_batch[table_info.Name].ids.UnionWith(ids);
    }
    //this callback will get called once the job is done by a thread that will acquire resource lock
    ModifyBatch._delete_batch[table_info.Name].Callbacks.Add(f =>
    {
        done = true;
        error = f;
    });
}

bool lockAcquired = false;
int maxWaitMs = 60000;
DeleteData _delete_data = null;

//resource lock
var _write_lock = GetTableWriteLock(typeof(T).Name);
try
{
    DateTime start = DateTime.Now;
    while (!done)
    {
        lockAcquired = Monitor.TryEnter(_write_lock, 100);
        if (lockAcquired)
        {
            if (done) //some other thread did our job
            {
                Monitor.Exit(_write_lock);
                lockAcquired = false;
                break;
            }
            else
            {
                break;
            }
        }
        Thread.Sleep(100);
        if ((DateTime.Now - start).TotalMilliseconds > maxWaitMs)
        {
            throw new TimeoutException("Timed out waiting to acquire the write lock.");
        }
    }
    if (done) //some other thread did our job
    {
        if (!string.IsNullOrEmpty(error))
        {
            throw new Exception(error);
        }
        else
        {
            return;
        }
    }

    //not done, but have write lock for the table
    lock (ilock)
    {
        _delete_data = ModifyBatch._delete_batch[table_info.Name];
        ModifyBatch._delete_batch.TryRemove(table_info.Name, out _);
    }
    if (_delete_data.ids.Any())
    {
        //doing the work with resource 
    }
    foreach (var cb in _delete_data.Callbacks)
    {
        cb(null);
    }
}
catch (Exception ex)
{
    if (_delete_data != null)
    {
        foreach (var cb in _delete_data.Callbacks)
        {
            cb(ex.Message);
        }
    }
    throw;
}
finally
{
    if (lockAcquired)
    {
        Monitor.Exit(_write_lock);
    }
}

If it is OK to process the task outside the scope of the current request, i.e. to queue it for later, then you can use a sequence like this [1]:

Implement a resource lock (monitor) and a List of tasks.

For each request:

  1. Lock the List, add the current task to the List, remember the number of tasks in the List, unlock the List.

  2. Try to acquire the lock.

  3. If unsuccessful:

    • If the number of tasks in the List < threshold X, then return.
    • Else acquire the lock (will block).

  4. Lock the List, move its contents to a temp list, unlock the List.

  5. If the temp list is not empty:

    • Execute the tasks in the temp list.

    • Repeat from step 4.

  6. Release the lock.

The first request will go through the whole sequence. Subsequent requests, if the first is still executing, will short-circuit at step 3.

Tune for the optimal threshold X (or change it to a time-based threshold).


[1] If you need to wait for the task in the scope of the request, then extend the process slightly:

Add two fields to the task class: a completion flag and an exception.

At step 3, before returning, wait for the task to complete (Monitor.Wait) until its completion flag becomes true. If the exception is not null, throw it.

At step 5, for each task, set the completion flag and optionally the exception, and then notify the waiters (Monitor.PulseAll).
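The sequence above, including the wait extension from the footnote, can be sketched in C# roughly as follows. All names here (BatchProcessor, BatchTask, ThresholdX) are illustrative, not taken from the original code; treat this as a sketch of the pattern, not a drop-in implementation.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical task wrapper carrying the completion flag and exception
// described in the footnote.
class BatchTask
{
    public Action Work;
    public bool Completed;
    public Exception Error;
}

class BatchProcessor
{
    private readonly object _resourceLock = new object();        // resource lock (monitor)
    private readonly List<BatchTask> _pending = new List<BatchTask>();
    private const int ThresholdX = 10;                            // tune for your workload

    public void Execute(Action work)
    {
        var task = new BatchTask { Work = work };
        int count;
        lock (_pending)                       // step 1: add the task, remember the count
        {
            _pending.Add(task);
            count = _pending.Count;
        }

        if (!Monitor.TryEnter(_resourceLock)) // step 2: try to acquire the lock
        {
            if (count < ThresholdX)           // step 3: below threshold, let the holder batch us
            {
                WaitForCompletion(task);      // footnote: wait instead of just returning
                return;
            }
            Monitor.Enter(_resourceLock);     // step 3: else acquire (will block)
        }
        try
        {
            while (true)
            {
                List<BatchTask> batch;
                lock (_pending)               // step 4: move contents to a temp list
                {
                    if (_pending.Count == 0) break; // step 5: temp list empty -> done
                    batch = new List<BatchTask>(_pending);
                    _pending.Clear();
                }
                foreach (var t in batch)      // step 5: execute the batch
                {
                    try { t.Work(); }
                    catch (Exception ex) { t.Error = ex; }
                    lock (t)                  // footnote: mark complete, wake waiters
                    {
                        t.Completed = true;
                        Monitor.PulseAll(t);
                    }
                }
            }                                 // repeat from step 4
        }
        finally
        {
            Monitor.Exit(_resourceLock);      // step 6: release the lock
        }
        // By now our own task was executed, either by us or by a previous holder.
        if (task.Error != null) throw task.Error;
    }

    private static void WaitForCompletion(BatchTask task)
    {
        lock (task)
        {
            while (!task.Completed) Monitor.Wait(task);
        }
        if (task.Error != null) throw task.Error;
    }
}
```

The thread that wins the resource lock keeps draining the list until it is empty, so tasks queued while a batch is executing are picked up before the lock is released; every other thread blocks on Monitor.Wait instead of polling a flag in a sleep loop.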
