
Distributed critical section in web-farm

I have about 50 web-sites, load-balanced across 5 web-servers. They all use Enterprise Library Caching, and access the same Caching database. The items in the Caching database are refreshed every few hours, using an ICacheItemRefreshAction implementation.

I want to guarantee that only one web-site ever refreshes the cache, by putting the refresh code in a critical section.

  • If the web-sites were running in a single app-pool on a single server, I could use a lock()

  • If the web-sites were running in separate app-pools on a single server, I could use a Mutex.

However, these will not ensure the critical section across multiple web-servers.

Currently, I am creating a new key in the caching database to act as a mutex. This will generally work, but I can see a slim chance that 2 processes could enter the critical section.

public class TakeLongTimeToRefresh : ICacheItemRefreshAction
{
    #region ICacheItemRefreshAction Members

    public void Refresh(string removedKey, object expiredValue, CacheItemRemovedReason removalReason)
    {
        const string lockKey = "lockKey";
        ICacheManager cm = CacheFactory.GetCacheManager();

        if (!cm.Contains(lockKey))
        {
            Debug.WriteLine("Entering critical section");
            // Add a lock-key which will never expire, for synchronisation.
            // I can see a small window of opportunity for another process to enter
            // the critical section here, between the Contains check and the Add...
            cm.Add(lockKey, lockKey,
                   CacheItemPriority.NotRemovable, null,
                   new NeverExpired());
            try
            {
                object newValue = SomeLengthyWebserviceCall();
                cm.Remove(removedKey);
                Utilities.AddToCache(removedKey, newValue);
            }
            finally
            {
                // Release the lock-key even if the refresh throws.
                cm.Remove(lockKey);
            }
        }
    }

    #endregion
}

Is there a way of having a guaranteed critical section to ensure I don't call the web-service twice?

EDIT I should add that I can't use a shared file, as the deployment policies will prevent it.

StackOverflow references:

You have to involve some external lock acquisition common to all of them. For example, a table t in SQL with one row and one lock field, where you acquire the lock with:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE t SET [lock] = 1 WHERE [lock] = 0;

Check the rows affected: if it is 1, you have the lock; release it by updating the field back to 0. This essentially piggybacks on SQL Server's row lock. If two transactions start at the same time, only one will gain the U lock after its S lock; the other will block and subsequently report 0 rows affected (since the first transaction flipped the value to 1).
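
A minimal sketch of that pattern, assuming a SQL Server table named t with a single bit column (all names here are illustrative):

```sql
-- One-row lock table; create and seed it once.
CREATE TABLE t ([lock] bit NOT NULL DEFAULT 0);
INSERT INTO t ([lock]) VALUES (0);

-- Attempt to acquire the lock.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE t SET [lock] = 1 WHERE [lock] = 0;
-- If @@ROWCOUNT is 1, this caller holds the lock.
COMMIT TRANSACTION;

-- Release the lock once the refresh has finished.
UPDATE t SET [lock] = 0 WHERE [lock] = 1;
```

Because the UPDATE takes a row lock, only one of two concurrent callers can flip the value from 0 to 1; the loser sees 0 rows affected and should skip the refresh.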

I suggest you move the logic for creating/returning a lock handle into the database, combining the two steps; this guarantees that only one process ever holds the lock.

So the database could have a stored procedure you call to ask for the lock: either it returns an empty result (unsuccessful), or it creates a record and returns it.
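
A sketch of such a stored procedure (the procedure and column names are illustrative); it returns one row when the lock is granted and an empty result set otherwise:

```sql
CREATE PROCEDURE dbo.TryAcquireRefreshLock
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;

    UPDATE t SET [lock] = 1 WHERE [lock] = 0;

    -- One row affected means this caller won the lock;
    -- otherwise return nothing so the caller knows to back off.
    IF @@ROWCOUNT = 1
        SELECT 1 AS LockAcquired;

    COMMIT TRANSACTION;
END
```

The caller simply checks whether the result set is empty before starting the lengthy web-service call.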
