
Lock-free, awaitable, exclusive access methods

I have a thread-safe class which uses a particular resource that needs to be accessed exclusively. In my assessment it does not make sense to have the callers of various methods block on a Monitor.Enter or await a SemaphoreSlim in order to access this resource.

For instance, I have some "expensive" asynchronous initialization. Since it does not make sense to initialize more than once, whether it be from multiple threads or a single one, multiple calls should return immediately (or even throw an exception). Instead, one should create the instance, initialize it, and then distribute it to multiple threads.

UPDATE 1:

MyClass uses two NamedPipes in either direction. The InitBeforeDistribute method is not really initialization, but rather properly setting up a connection in both directions. It does not make sense to make the pipe available to N threads before the connection has been set up. Once it is set up, multiple threads can post work, but only one can actually read/write to the stream. My apologies for obfuscating this with the poor naming in the examples.

UPDATE 2:

If InitBeforeDistribute implemented a SemaphoreSlim(1, 1) with proper await logic (instead of the interlocked operation throwing an exception), would the AddSquare / DoSquare methods below be acceptable practice? They do not throw a redundant exception (as InitBeforeDistribute does), while remaining lock-free.
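
For illustration, here is roughly what I mean by "proper await logic" with a SemaphoreSlim(1, 1); the class name below is just a placeholder for this sketch:

using System.Threading;
using System.Threading.Tasks;

class MyClassWithSemaphore
{
    private readonly SemaphoreSlim m_initLock = new SemaphoreSlim(1, 1);
    private volatile bool vm_isInited = false;

    public async Task InitBeforeDistribute()
    {
        // Callers queue up on the semaphore instead of getting an exception.
        await this.m_initLock.WaitAsync().ConfigureAwait(false);
        try
        {
            if (this.vm_isInited)
                return; // an earlier caller already finished the initialization

            await Task.Delay(5000)      // stand-in for the real asynchronous setup
                .ConfigureAwait(false);

            this.vm_isInited = true;
        }
        finally
        {
            this.m_initLock.Release();
        }
    }
}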

The following would be a good example:

using System;
using System.Threading;
using System.Threading.Tasks;

class MyClass
{
    private int m_isIniting = 0; // exclusive access "lock"
    private volatile bool vm_isInited = false; // vol. because other methods will read it

    public async Task InitBeforeDistribute()
    {
        if (Interlocked.Exchange(ref this.m_isIniting, -1) != 0)
            throw new InvalidOperationException(
                "Cannot init concurrently! Did you distribute before init was finished?");

        try
        {
            if (this.vm_isInited)
                return;

            await Task.Delay(5000)      // init asynchronously
                .ConfigureAwait(false);

            this.vm_isInited = true;
        }
        finally
        {
            Interlocked.Exchange(ref this.m_isIniting, 0); // release the init "lock"
        }
    }
}

Some points:

  1. If there is a case where blocking/awaiting access to a lock makes perfect sense, then this example does not (make sense, that is).
  2. Since I need to await in the method, I must use something like a SemaphoreSlim if I were to use a "proper" lock. Forgoing the semaphore in the example above allows me not to worry about disposing the class once I'm done with it. (I always disliked the idea of disposing an item used by multiple threads. This is a minor positive, for sure.)
  3. If the method is called often there might be some performance benefits, which of course should be measured.

The above example does not make sense with reference to (3.), so here is another example:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class MyClass
{
    private volatile bool vm_isInited = false; // see above example
    private int m_isWorking = 0; // exclusive access "lock"
    private readonly ConcurrentQueue<Tuple<int, TaskCompletionSource<int>>> m_squareWork =
        new ConcurrentQueue<Tuple<int, TaskCompletionSource<int>>>();

    public Task<int> AddSquare(int number)
    {
        if (!this.vm_isInited) // see above example
            throw new InvalidOperationException(
                "You forgot to init! Did you already distribute?");

        var work = new Tuple<int, TaskCompletionSource<int>>(number, new TaskCompletionSource<int>());
        this.m_squareWork.Enqueue(work);

        Task worker = DoSquare(); // fire-and-forget: the caller awaits work.Item2.Task instead

        return work.Item2.Task;
    }

    private async Task DoSquare()
    {
        if (Interlocked.Exchange(ref this.m_isWorking, -1) != 0)
            return; // let someone else do the work for you

        do
        {
            try
            {
                Tuple<int, TaskCompletionSource<int>> work;

                while (this.m_squareWork.TryDequeue(out work))
                {
                    await Task.Delay(5000)      // Limiting resource that can only be
                        .ConfigureAwait(false); // used by one thread at a time.

                    work.Item2.TrySetResult(work.Item1 * work.Item1);
                }
            }
            finally
            {
                Interlocked.Exchange(ref this.m_isWorking, 0);
            }
        } while (this.m_squareWork.Count != 0 &&
            Interlocked.Exchange(ref this.m_isWorking, -1) == 0);
    }
}

Are there some of the specific negative aspects of this "lock-free" example that I should pay attention to?

Most questions relating to "lock-free" code on SO generally advise against it, stating that it is for the "experts". Rarely (I could be wrong on this one) do I see suggestions for books/blogs/etc. that one can delve into, should one be so inclined. If there are any such resources I should look into, please share. Any suggestions will be highly appreciated!

Update: a great related article:

Creating High-Performance Locks and Lock-free Code (for .NET)


  1. The main point about lock-free algorithms is not that they are for experts.
    The main point is: do you really need a lock-free algorithm here? I can't follow your logic here:

    Since it does not make sense to initialize more than once, whether it be from multiple threads or a single one, multiple calls should return immediately (or even throw an exception).

    Why can't your users simply wait for the result of initialization and use your resource after that? If you can, simply use the Lazy<T> class or even asynchronous lazy initialization (see the first sketch after this list).

  2. You really should read about consensus numbers and CAS operations, and why they matter when implementing your own synchronization primitive.

    In your code you are using the Interlocked.Exchange method, which is not a true CAS, as it always exchanges the value, and it has a consensus number equal to 2. This means that a primitive built with this construct will work correctly only for 2 threads (not the case in your situation, but still 2). A CAS-style acquire using Interlocked.CompareExchange is sketched after this list.

    I tried to determine whether your code works correctly for 3 threads, or whether there are circumstances that could lead your application into a damaged state, but after 30 minutes I stopped. Any of your team members will stop like me after some time spent trying to understand your code. This is a waste of time, not only yours but your team's. Don't reinvent the wheel unless you really have to.

  3. My favorite book in this area is Writing High-Performance .NET Code by Ben Watson, and my favorite blog is Stephen Cleary's. If you can be more specific about what kind of book you are interested in, I can add some more references.

  4. Having no locks in your program doesn't make your application lock-free. In a .NET application you really should not use exceptions for your internal program flow. Consider the case where the initializing thread isn't scheduled for a while by the OS (for various reasons, no matter what they are exactly).

    In this case all other threads in your app will die one by one while trying to access your shared resource. I can't say that this is lock-free code. Yes, there are no locks in it, but that doesn't guarantee the correctness of the program, and thus it isn't lock-free by definition.
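
To illustrate point 1, here is a minimal sketch of what asynchronous lazy initialization could look like for a class such as MyClass. The AsyncLazy<T> wrapper and the class/member names are illustrative assumptions built on the well-known Lazy<Task<T>> pattern, not part of the original code:

using System;
using System.Threading.Tasks;

// Illustrative async-lazy wrapper: the factory runs at most once,
// and every caller awaits the same Task.
class AsyncLazy<T>
{
    private readonly Lazy<Task<T>> m_lazy;

    public AsyncLazy(Func<Task<T>> factory)
    {
        m_lazy = new Lazy<Task<T>>(() => Task.Run(factory));
    }

    public Task<T> Value
    {
        get { return m_lazy.Value; }
    }
}

class MyClassUsingAsyncLazy
{
    private readonly AsyncLazy<bool> m_init;

    public MyClassUsingAsyncLazy()
    {
        m_init = new AsyncLazy<bool>(async () =>
        {
            await Task.Delay(5000).ConfigureAwait(false); // stand-in for the pipe handshake
            return true;
        });
    }

    public async Task<int> AddSquare(int number)
    {
        // The first caller triggers initialization; everyone else simply awaits it.
        await m_init.Value.ConfigureAwait(false);
        return number * number; // the real work/queueing would go here
    }
}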
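
And to illustrate point 2, a CAS-style acquire would use Interlocked.CompareExchange, which writes the new value only when the current value matches the expected one. A minimal sketch using the same flag convention as the question's code (class and method names are placeholders):

using System.Threading;

class CasSketch
{
    private int m_isWorking = 0; // 0 = free, -1 = taken

    public bool TryAcquire()
    {
        // Atomically: if m_isWorking is still 0, set it to -1 and report success.
        // Unlike Interlocked.Exchange, nothing is written when the flag is already taken.
        return Interlocked.CompareExchange(ref this.m_isWorking, -1, 0) == 0;
    }

    public void Release()
    {
        Interlocked.Exchange(ref this.m_isWorking, 0); // hand the flag back
    }
}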

The Art of Multiprocessor Programming by Maurice Herlihy and Nir Shavit is a great resource for lock-free and wait-free programming. Lock-freedom is a progress guarantee rather than a mode of programming, so to argue that an algorithm is lock-free, one has to validate or show proof of the progress guarantee. In simple terms, lock-freedom implies that the blocking or halting of one thread does not block the progress of other threads, or that if a thread is blocked infinitely often, then there is some other thread that makes progress infinitely often.
