
C# manual lock/unlock

I have a function in C# that can be called multiple times from multiple threads, and I want it to run only once, so I thought about this:

class MyClass
{
    bool done = false;
    public void DoSomething()
    {
        lock(this)
            if(!done)
            {
                done = true;
                _DoSomething();
            }
    }
}

The problem is that _DoSomething takes a long time, and I don't want many threads to wait on it when they could just see that done is true.
Something like this can be a workaround:

class MyClass
{
    bool done = false;
    public void DoSomething()
    {
        bool doIt = false;
        lock(this)
            if(!done)
                doIt = done = true;
        if(doIt)
             _DoSomething();
    }
}

But being able to do the locking and unlocking manually would be much better.
How can I manually lock and unlock just like the lock(object) statement does? I need it to use the same mechanism as lock, so that this manual way and lock will block each other (for more complex cases).

The lock keyword is just syntactic sugar for Monitor.Enter and Monitor.Exit:

Monitor.Enter(o);
try
{
    //put your code here
}
finally
{
    Monitor.Exit(o);
}

is the same as

lock(o)
{
    //put your code here
}
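
As a side note, since C# 4.0 the compiler actually expands lock into a slightly more defensive form, so that the monitor is released only if it was actually acquired. A rough sketch of the emitted pattern (the variable name lockTaken is just illustrative):

bool lockTaken = false;
try
{
    Monitor.Enter(o, ref lockTaken); // sets lockTaken to true once the lock is held
    //put your code here
}
finally
{
    if (lockTaken)
        Monitor.Exit(o); // release only if Enter actually succeeded
}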

Thomas suggests double-checked locking in his answer. This is problematic. First off, you should not use low-lock techniques unless you have demonstrated that you have a real performance problem that is solved by the low-lock technique. Low-lock techniques are insanely difficult to get right.

Second, it is problematic because we don't know what "_DoSomething" does or what consequences of its actions we are going to rely on.

Third, as I pointed out in a comment above, it seems crazy to report that _DoSomething is "done" when another thread is in fact still in the process of doing it. I don't understand why you have that requirement, and I'm going to assume that it is a mistake. The problems with this pattern still exist even if we set "done" after "_DoSomething" does its thing.

Consider the following:

class MyClass
{
    readonly object locker = new object();
    bool done = false;

    public void DoSomething()
    {
        if (!done)
        {
            lock(locker)
            {
                if(!done)
                {
                    ReallyDoSomething();
                    done = true;
                }
            }
        }
    }

    int x;
    void ReallyDoSomething()
    {
        x = 123;
    }

    void DoIt()
    {
        DoSomething();
        int y = x;
        Debug.Assert(y == 123); // Can this fire?
    }
}

Is this threadsafe in all possible implementations of C#? I don't think it is. Remember, non-volatile reads may be moved around in time by the processor cache. The C# language guarantees that volatile reads are consistently ordered with respect to critical execution points like locks, and it guarantees that non-volatile reads are consistent within a single thread of execution, but it does not guarantee that non-volatile reads are consistent in any way across threads of execution.

Let's look at an example.

Suppose there are two threads, Alpha and Bravo. Both call DoIt on a fresh instance of MyClass. What happens?

On thread Bravo, the processor cache happens to do a (non-volatile) fetch of the memory location for x, which contains zero. "done" happens to be on a different page of memory which is not fetched into the cache quite yet.

On thread Alpha, at the "same time" on a different processor, DoIt calls DoSomething. Thread Alpha now runs everything in there. When thread Alpha is done with its work, done is true and x is 123 on Alpha's processor. Thread Alpha's processor flushes those facts back out to main memory.

Thread Bravo now runs DoSomething. It reads the page of main memory containing "done" into the processor cache and sees that it is true.

So now "done" is true, but "x" is still zero in the processor cache for thread Bravo. Thread Bravo is not required to invalidate the portion of the cache that contains "x" being zero because on thread Bravo neither the read of "done" nor the read of "x" were volatile reads.

The proposed version of double-checked locking is not actually double-checked locking at all. When you change the double-checked locking pattern you need to start over again from scratch and re-analyze everything.

The way to make this version of the pattern correct is to make at least the first read of "done" into a volatile read. Then the read of "x" will not be permitted to move "ahead" of the volatile read to "done".
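
For illustration, here is a minimal sketch of that fix: marking done as volatile makes every read of it a volatile read (which is more than the minimum required, but sufficient), so the read of x cannot be moved ahead of it. Everything else is as in the class above.

class MyClass
{
    readonly object locker = new object();
    volatile bool done = false; // volatile: the unlocked read of done is now an ordered read

    int x;

    public void DoSomething()
    {
        if (!done) // volatile read; the later read of x cannot move ahead of this
        {
            lock(locker)
            {
                if(!done)
                {
                    ReallyDoSomething();
                    done = true; // publish only after the work has finished
                }
            }
        }
    }

    void ReallyDoSomething()
    {
        x = 123;
    }
}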

You can check the value of done both before entering the lock and again inside it:

    if (!done)
    {
        lock(this)
        {
            if(!done)
            {
                done = true;
                _DoSomething();
            }
        }
    }

This way you won't enter the lock if done is true. The second check inside the lock is to cope with race conditions if two threads enter the first if at the same time.

BTW, you shouldn't lock on this, because it can cause deadlocks. Lock on a private field instead (like private readonly object _syncLock = new object()).
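
Putting those two points together, a minimal sketch of the same pattern with a private lock object (field names are illustrative; as the answer above explains, done would also need to be made volatile for the unlocked read to be strictly correct):

class MyClass
{
    private readonly object _syncLock = new object();
    private bool done = false;

    public void DoSomething()
    {
        if (!done)              // cheap pre-check outside the lock
        {
            lock (_syncLock)    // lock on a private field, never on 'this'
            {
                if (!done)      // re-check inside the lock to handle races
                {
                    done = true;
                    _DoSomething();
                }
            }
        }
    }

    private void _DoSomething()
    {
        // long-running work goes here
    }
}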

The lock keyword is just syntactic sugar for the Monitor class. Alternatively, you could call Monitor.Enter() and Monitor.Exit() yourself.

But the Monitor class also has the methods TryEnter() and Wait(), which could help in your situation.
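
For example, here is a hypothetical sketch using Monitor.TryEnter, so that a thread which cannot take the lock immediately simply skips the work instead of waiting (the field name _gate is illustrative; _DoSomething is the method from the question):

private readonly object _gate = new object();
private bool done = false;

public void DoSomething()
{
    if (done)
        return; // work already started or finished; nothing to wait for

    bool lockTaken = false;
    try
    {
        Monitor.TryEnter(_gate, ref lockTaken); // returns immediately instead of blocking
        if (!lockTaken)
            return; // another thread is already doing the work

        if (!done)
        {
            done = true;
            _DoSomething();
        }
    }
    finally
    {
        if (lockTaken)
            Monitor.Exit(_gate);
    }
}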

I know this answer comes several years late, but none of the current answers seem to address your actual scenario, which only became apparent after your comment:

The other threads don't need to use any information generated by ReallyDoSomething.

If the other threads don't need to wait for the operation to complete, the second code snippet in your question would work fine. You can optimize it further by eliminating your lock entirely and using an atomic operation instead:

private int done = 0;
public void DoSomething()
{
    if (Interlocked.Exchange(ref done, 1) == 0)   // only evaluates to true ONCE
        _DoSomething();
}

Furthermore, if your _DoSomething() is a fire-and-forget operation, then you might not even need the first thread to wait for it, allowing it to run asynchronously in a task on the thread pool:

int done = 0;

public void DoSomething()
{
    if (Interlocked.Exchange(ref done, 1) == 0)
        Task.Factory.StartNew(_DoSomething);
}
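
On .NET 4.5 and later the same fire-and-forget idea is usually written with Task.Run, which is shorthand for queuing the work on the thread pool with the default options:

int done = 0;

public void DoSomething()
{
    if (Interlocked.Exchange(ref done, 1) == 0)
        Task.Run(_DoSomething); // queue the work and return immediately
}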
