
How do I make a lock that allows only ONE thread to read from the resource?

I have a file that holds an integer ID value. Currently reading the file is protected with ReaderWriterLockSlim as such:

    public int GetId()
    {
        _fileLock.EnterUpgradeableReadLock();
        int id = 0;
        try {
            if(!File.Exists(_filePath))
                CreateIdentityFile();

            using (FileStream readStream = new FileStream(_filePath, FileMode.Open, FileAccess.Read))
            using (StreamReader sr = new StreamReader(readStream))
            {
                id = int.Parse(sr.ReadLine());
            }
            return id;
        }
        finally {
            SaveNextId(id);     // increment the id
            _fileLock.ExitUpgradeableReadLock();
        }
    }

The problem is that subsequent actions after GetId() might fail. As you can see, the GetId() method increments the ID every single time, regardless of what happens after it has issued an ID. If an exception occurs later, the issued ID is left hanging, and because the counter has already been incremented, some IDs end up unused.

So I was thinking of moving SaveNextId(id) out of GetId() (SaveNextId() actually uses the lock too, except that it calls EnterWriteLock) and calling it manually from outside, after all the required methods have executed. That brings out another problem: multiple threads might enter the GetId() method before SaveNextId() gets executed, and they might all receive the same ID.

I don't want any solutions where I have to alter the IDs after the operation, correcting them in any way because that's not nice and might lead to more problems.

I need a solution where I can somehow callback into the FileIdentityManager (that's the class that handles these IDs) and let the manager know that it can perform the saving of the next ID and then release the read lock on the file containing the ID.

Essentially I want to replicate a relational database's autoincrement behaviour: if anything goes wrong during row insertion, the ID is not used and remains available, but it also never happens that the same ID is issued twice. Hopefully the question is understandable enough for you to provide some solutions.

UPDATE: Please see the comments to the answers for more details about the behaviour I want
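One way to get the "callback into the FileIdentityManager" shape described above is to hand the caller a disposable scope that holds the lock, exposes the reserved ID, and only persists the increment when the caller explicitly commits. This is a minimal sketch with an in-memory counter standing in for the file; the names `ReserveId`, `IdScope`, and `Commit` are hypothetical, not part of the original class:

```csharp
using System;
using System.Threading;

public sealed class FileIdentityManager
{
    private readonly object _sync = new object();
    private int _nextId = 1;   // stand-in: the real class reads this from the file

    public IdScope ReserveId() => new IdScope(this);

    public sealed class IdScope : IDisposable
    {
        private readonly FileIdentityManager _mgr;

        internal IdScope(FileIdentityManager mgr)
        {
            _mgr = mgr;
            Monitor.Enter(mgr._sync);   // hold the lock for the whole scope
            Id = mgr._nextId;
        }

        public int Id { get; }

        // The caller invokes this only after the dependent work succeeded.
        public void Commit()
        {
            _mgr._nextId = Id + 1;      // real code would call SaveNextId here
        }

        public void Dispose()
        {
            // If Commit was never called, the ID is simply not consumed
            // and will be handed out again to the next caller.
            Monitor.Exit(_mgr._sync);
        }
    }
}
```

Usage would look like `using (var scope = manager.ReserveId()) { DoWork(scope.Id); scope.Commit(); }` — an exception in `DoWork` skips the commit, so the ID is reissued. The trade-off is that the lock is held for the duration of the caller's work.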

    private static readonly object _lock = new object();

    public int GetId()   
    {
      lock(_lock)
      {
        // your code to get the ID here
      }
    }

> Essentially I want to replicate the relational database's autoincrement behaviour - if anything goes wrong during row insertion, the ID is not used, it is still available for use but it also never happens that the same ID is issued. Hopefully the question is understandable enough for you to provide some solutions.

Generally speaking, that is not the behavior I've observed. When you insert a row into a table with an autoincrement column inside a transaction and the transaction is rolled back, you've lost that ID.

So in my opinion the way you've implemented this is the correct behavior.

Update: The only way to ensure that you "don't want to waste them on unsuccessful file saves, unsuccessful type casts, etc." is to widen the scope of your blocking code so that it blocks from the moment you request a new ID until your save is complete, and on failure rolls back the increment of the ID.

This will drastically reduce the level of parallelism you can achieve.

If you want to keep the potential for parallelism higher, you should check everything you can before you request an ID, e.g. validate types and formats up front.

Obviously, some things, such as external errors (IO exceptions), you simply cannot do anything about.
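The widened blocking scope described above can be sketched as follows. This is an illustrative stand-in, not the asker's actual class: the file I/O is replaced with in-memory state, and the names `IdStore`, `SaveWithId`, and the `validate` callback are assumptions made for the example:

```csharp
using System;
using System.Collections.Generic;

public static class IdStore
{
    private static readonly object _lock = new object();
    private static int _current = 1;                    // stands in for the ID file
    private static readonly List<string> _records = new List<string>();

    // Holds the lock from the moment the ID is requested until the save completes.
    public static int SaveWithId(string record, Func<string, bool> validate)
    {
        lock (_lock)                                    // no other thread can take an ID meanwhile
        {
            int id = _current;                          // read, but do not increment yet
            if (!validate(record))                      // the step that "might fail"
                throw new InvalidOperationException("save failed; ID not consumed");
            _records.Add(id + ":" + record);
            _current = id + 1;                          // increment only after success
            return id;
        }
    }
}
```

Because the increment happens last, a thrown exception leaves the counter untouched, so the next caller receives the same ID — at the cost of serializing the entire save.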

The behavior you see in your database is possible because the ID generation and row insertion are atomic. If you want to have this behavior in your application, then I suggest you only get the ID immediately before storing the data. This will reduce your "transaction scope" to the minimum possible window, and should prevent any exceptions from interfering.

If for some bad reason this isn't possible, an alternative might be to have an "ID broker" that caches the ID counter. It would read the current counter from the file, increment it by some number (say 100), then hand out successive IDs to all callers through a single-threaded method. When it has handed out all 100, it updates the file again. At shutdown, it would write the file one last time, using the last value it handed out. The only problem then is if your system crashes you wind up with a gap in your IDs, but there are ways to compensate for that.
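The broker idea above might be sketched like this, assuming a block size of 100 as the answer suggests. `IdBroker`, `NextId`, and `Flush` are hypothetical names, and the `persistCounter` delegate stands in for the file write:

```csharp
using System;

public sealed class IdBroker
{
    private const int BlockSize = 100;
    private readonly object _sync = new object();
    private readonly Action<int> _persistCounter;   // e.g. write the counter to the ID file
    private int _next;                              // next ID to hand out
    private int _blockEnd;                          // first ID NOT covered by the reserved block

    public IdBroker(int persistedCounter, Action<int> persistCounter)
    {
        _persistCounter = persistCounter;
        _next = persistedCounter;
        _blockEnd = persistedCounter;               // forces a reservation on first use
    }

    public int NextId()
    {
        lock (_sync)                                // the "single-threaded method"
        {
            if (_next == _blockEnd)
            {
                _blockEnd = _next + BlockSize;      // reserve the next block of 100
                _persistCounter(_blockEnd);         // one file write per 100 IDs
            }
            return _next++;
        }
    }

    // Call at shutdown so a restart resumes right after the last issued ID.
    public void Flush()
    {
        lock (_sync)
        {
            _persistCounter(_next);
            _blockEnd = _next;
        }
    }
}
```

The file write now happens once per 100 IDs instead of once per ID; a crash between reservations loses at most the remainder of the current block, which is the gap the answer mentions.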
