
MemoryCache Thread Safety, Is Locking Necessary?

For starters, let me just throw it out there that I know the code below is not thread safe (correction: it might be). What I am struggling with is finding an implementation that is, and one that I can actually get to fail under test. I am refactoring a large WCF project right now that needs some (mostly) static data cached and populated from a SQL database. It needs to expire and "refresh" at least once a day, which is why I am using MemoryCache.

I know that the code below should not be thread safe, but I cannot get it to fail under heavy load. To complicate matters, a Google search shows implementations both ways (with and without locks), combined with debates over whether or not the locks are necessary.

Could someone with knowledge of MemoryCache in a multi-threaded environment let me know definitively whether or not I need to lock where appropriate, so that a call to Remove (which will seldom be called, but is a requirement) will not throw during retrieval/repopulation?

public class MemoryCacheService : IMemoryCacheService
{
    private const string PunctuationMapCacheKey = "punctuationMaps";
    private static readonly ObjectCache Cache;
    private readonly IAdoNet _adoNet;

    static MemoryCacheService()
    {
        Cache = MemoryCache.Default;
    }

    public MemoryCacheService(IAdoNet adoNet)
    {
        _adoNet = adoNet;
    }

    public void ClearPunctuationMaps()
    {
        Cache.Remove(PunctuationMapCacheKey);
    }

    public IEnumerable GetPunctuationMaps()
    {
        if (Cache.Contains(PunctuationMapCacheKey))
        {
            return (IEnumerable) Cache.Get(PunctuationMapCacheKey);
        }

        var punctuationMaps = GetPunctuationMappings();

        if (punctuationMaps == null)
        {
            throw new ApplicationException("Unable to retrieve punctuation mappings from the database.");
        }

        if (punctuationMaps.Cast<IPunctuationMapDto>().Any(p => p.UntaggedValue == null || p.TaggedValue == null))
        {
            throw new ApplicationException("Null values detected in Untagged or Tagged punctuation mappings.");
        }

        // Store data in the cache
        var cacheItemPolicy = new CacheItemPolicy
        {
            AbsoluteExpiration = DateTime.Now.AddDays(1.0)
        };

        Cache.AddOrGetExisting(PunctuationMapCacheKey, punctuationMaps, cacheItemPolicy);

        return punctuationMaps;
    }

    // Go old-school ADO.NET to break the dependency on Entity Framework; the database handler is injected to populate the cache
    private IEnumerable GetPunctuationMappings()
    {
        var table = _adoNet.ExecuteSelectCommand("SELECT [id], [TaggedValue],[UntaggedValue] FROM [dbo].[PunctuationMapper]", CommandType.Text);
        if (table != null && table.Rows.Count != 0)
        {
            return AutoMapper.Mapper.DynamicMap<IDataReader, IEnumerable<PunctuationMapDto>>(table.CreateDataReader());
        }

        return null;
    }
}

The default MS-provided MemoryCache is entirely thread safe. Any custom implementation that derives from MemoryCache may not be thread safe. If you're using plain MemoryCache out of the box, it is thread safe. Browse the source code of my open source distributed caching solution to see how I use it (MemCache.cs):

https://github.com/haneytron/dache/blob/master/Dache.CacheHost/Storage/MemCache.cs
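
To see the thread safety for yourself, below is a minimal, self-contained sketch (not taken from the linked repo; the class name, key and iteration count are arbitrary) that hammers MemoryCache.Default with concurrent adds, reads and removes. The point is simply that none of the calls throw, even when a Remove races with a Get:

using System;
using System.Runtime.Caching;
using System.Threading.Tasks;

class MemoryCacheSmokeTest
{
    static void Main()
    {
        var cache = MemoryCache.Default;
        var policy = new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddDays(1) };

        // Reads, writes and removes against the same key interleave freely.
        Parallel.For(0, 100000, i =>
        {
            cache.AddOrGetExisting("punctuationMaps", i, policy);

            var value = cache.Get("punctuationMaps"); // may be null if another thread just removed the entry

            if (i % 1000 == 0)
            {
                cache.Remove("punctuationMaps");      // safe to call while other threads are reading
            }
        });

        Console.WriteLine("Completed without exceptions.");
    }
}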

While MemoryCache is indeed thread safe, as other answers have specified, it does have a common multi-threading issue: if two threads try to Get from (or check Contains on) the cache at the same time, then both will miss the cache, both will end up generating the result, and both will then add the result to the cache.

Often this is undesirable - the second thread should wait for the first to complete and use its result, rather than generating the result twice.

This was one of the reasons I wrote LazyCache - a friendly wrapper around MemoryCache that solves these sorts of issues. It is also available on NuGet.
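
For reference, a typical call through LazyCache looks roughly like the sketch below. This is an assumption-based illustration rather than the answer's own code: IAppCache, CachingService and a GetOrAdd overload taking an absolute expiration are LazyCache names as commonly documented, and GetPunctuationMappings() stands in for the caller's expensive factory method.

using System;
using LazyCache;

// ...

IAppCache cache = new CachingService();

var punctuationMaps = cache.GetOrAdd(
    "punctuationMaps",                  // cache key
    () => GetPunctuationMappings(),     // factory runs once per miss, even under contention
    DateTimeOffset.Now.AddDays(1));     // absolute expiration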

As others have stated, MemoryCache is indeed thread safe. The thread safety of the data stored within it, however, is entirely up to your use of it.

To quote Reed Copsey from his excellent post regarding concurrency and the ConcurrentDictionary<TKey, TValue> type, which is of course applicable here:

If two threads call this [GetOrAdd] simultaneously, two instances of TValue can easily be constructed.

You can imagine that this would be especially bad if TValue is expensive to construct.

To work your way around this, you can leverage Lazy<T> very easily, which coincidentally is very cheap to construct. Doing this ensures that if we get into a multi-threaded situation, we're only building multiple instances of Lazy<T> (which is cheap).

GetOrAdd() (GetOrCreate() in the case of MemoryCache) will return the same, singular Lazy<T> to all threads; the "extra" instances of Lazy<T> are simply thrown away.

Since the Lazy<T> doesn't do anything until .Value is called, only one instance of the object is ever constructed.

Now for some code! Below is an extension method for IMemoryCache which implements the above. It arbitrarily sets SlidingExpiration based on an int seconds method parameter, but this is entirely customizable based on your needs.

Note this is specific to .NET Core 2.0 apps.

public static T GetOrAdd<T>(this IMemoryCache cache, string key, int seconds, Func<T> factory)
{
    return cache.GetOrCreate<T>(key, entry => new Lazy<T>(() =>
    {
        entry.SlidingExpiration = TimeSpan.FromSeconds(seconds);

        return factory.Invoke();
    }).Value);
}

To call:

IMemoryCache cache;
var result = cache.GetOrAdd("someKey", 60, () => new object());

To perform this all asynchronously, I recommend using Stephen Toub's excellent AsyncLazy<T> implementation found in his article on MSDN. It combines the built-in lazy initializer Lazy<T> with the promise Task<T>:

public class AsyncLazy<T> : Lazy<Task<T>>
{
    public AsyncLazy(Func<T> valueFactory) :
        base(() => Task.Factory.StartNew(valueFactory))
    { }
    public AsyncLazy(Func<Task<T>> taskFactory) :
        base(() => Task.Factory.StartNew(() => taskFactory()).Unwrap())
    { }
}   

Now the async version of GetOrAdd():

public static Task<T> GetOrAddAsync<T>(this IMemoryCache cache, string key, int seconds, Func<Task<T>> taskFactory)
{
    return cache.GetOrCreateAsync<T>(key, async entry => await new AsyncLazy<T>(async () =>
    { 
        entry.SlidingExpiration = TimeSpan.FromSeconds(seconds);

        return await taskFactory.Invoke();
    }).Value);
}

And finally, to call:

IMemoryCache cache;
var result = await cache.GetOrAddAsync("someKey", 60, async () => new object());

Check out this link: http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache(v=vs.110).aspx

Go to the very bottom of the page (or search for the text "Thread Safety").

You will see:

Thread Safety

This type is thread safe.

Just uploaded a sample library to address the issue for .NET 2.0.

Take a look at this repo:

RedisLazyCache

I'm using a Redis cache, but it also fails over to just MemoryCache if the connection string is missing.

It's based on the LazyCache library, which guarantees a single execution of the callback for writes when multiple threads are trying to load and save data - especially useful if the callback is very expensive to execute.

As mentioned by @AmitE in response to @pimbrouwers' answer, his example is not working, as demonstrated here:

class Program
{
    static async Task Main(string[] args)
    {
        var cache = new MemoryCache(new MemoryCacheOptions());

        var tasks = new List<Task>();
        var counter = 0;

        for (int i = 0; i < 10; i++)
        {
            var loc = i;
            tasks.Add(Task.Run(() =>
            {
                var x = GetOrAdd(cache, "test", TimeSpan.FromMinutes(1), () => Interlocked.Increment(ref counter));
                Console.WriteLine($"Interation {loc} got {x}");
            }));
        }

        await Task.WhenAll(tasks);
        Console.WriteLine("Total value creations: " + counter);
        Console.ReadKey();
    }

    public static T GetOrAdd<T>(IMemoryCache cache, string key, TimeSpan expiration, Func<T> valueFactory)
    {
        return cache.GetOrCreate(key, entry =>
        {
            entry.SetSlidingExpiration(expiration);
            return new Lazy<T>(valueFactory, LazyThreadSafetyMode.ExecutionAndPublication);
        }).Value;
    }
}

Output:

Iteration 6 got 8
Iteration 7 got 6
Iteration 2 got 3
Iteration 3 got 2
Iteration 4 got 10
Iteration 8 got 9
Iteration 5 got 4
Iteration 9 got 1
Iteration 1 got 5
Iteration 0 got 7
Total value creations: 10

It seems like GetOrCreate always returns the created entry. Luckily, that's very easy to fix:

public static T GetOrSetValueSafe<T>(IMemoryCache cache, string key, TimeSpan expiration,
    Func<T> valueFactory)
{
    if (cache.TryGetValue(key, out Lazy<T> cachedValue))
        return cachedValue.Value;

    cache.GetOrCreate(key, entry =>
    {
        entry.SetSlidingExpiration(expiration);
        return new Lazy<T>(valueFactory, LazyThreadSafetyMode.ExecutionAndPublication);
    });

    return cache.Get<Lazy<T>>(key).Value;
}

That works as expected:

Iteration 4 got 1
Iteration 9 got 1
Iteration 1 got 1
Iteration 8 got 1
Iteration 0 got 1
Iteration 6 got 1
Iteration 7 got 1
Iteration 2 got 1
Iteration 5 got 1
Iteration 3 got 1
Total value creations: 1

The cache is thread safe, but as others have stated, it's possible that GetOrAdd will call the func multiple times if it is called from multiple threads.

Here is my minimal fix for that:

private readonly SemaphoreSlim _cacheLock = new SemaphoreSlim(1);

and

await _cacheLock.WaitAsync();
var data = await _cache.GetOrCreateAsync(key, entry => ...);
_cacheLock.Release();
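
One caveat with the snippet above: if GetOrCreateAsync throws, the semaphore is never released and every later caller will wait forever. A slightly safer sketch of the same idea puts the release in a finally block (the helper name GetOrCreateLockedAsync and the _cache/_cacheLock fields are illustrative, not from the original answer):

private readonly IMemoryCache _cache;
private readonly SemaphoreSlim _cacheLock = new SemaphoreSlim(1, 1);

public async Task<T> GetOrCreateLockedAsync<T>(string key, Func<ICacheEntry, Task<T>> factory)
{
    await _cacheLock.WaitAsync();
    try
    {
        // Only one caller at a time reaches GetOrCreateAsync, so the factory runs once per miss.
        return await _cache.GetOrCreateAsync(key, factory);
    }
    finally
    {
        _cacheLock.Release(); // released even if the factory throws
    }
}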
