
Which Java collection should I use to implement a thread-safe cache?

I'm looking to implement a simple cache without doing too much work (naturally). It seems to me that one of the standard Java collections ought to suffice, with a little extra work. Specifically, I'm storing responses from a server, and the keys can either be the request URL string or a hash code generated from the URL.

I originally thought I'd be able to use a WeakHashMap, but it looks like that method forces me to manage which objects I want to keep around, and any objects I don't manage with strong references are immediately swept away. Should I try out a ConcurrentHashMap of SoftReference values instead? Or will those be cleaned up pretty aggressively too?
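To make that second idea concrete, this is roughly what I have in mind (just a sketch; the class name is made up and byte[] stands in for whatever response type I end up storing):

    import java.lang.ref.SoftReference;
    import java.util.concurrent.ConcurrentHashMap;

    // Rough sketch of a ConcurrentHashMap-of-SoftReference cache: values can be
    // reclaimed under memory pressure, so get() has to cope with cleared references.
    class SoftResponseCache {
        private final ConcurrentHashMap<String, SoftReference<byte[]>> map =
                new ConcurrentHashMap<String, SoftReference<byte[]>>();

        void put(String url, byte[] response) {
            map.put(url, new SoftReference<byte[]>(response));
        }

        byte[] get(String url) {
            SoftReference<byte[]> ref = map.get(url);
            if (ref == null) {
                return null;              // never cached
            }
            byte[] response = ref.get();
            if (response == null) {
                map.remove(url, ref);     // reclaimed by the GC; drop the stale entry
            }
            return response;
        }
    }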

I'm now looking at the LinkedHashMap class. With some modifications it looks promising for an MRU cache. Any other suggestions?

Whichever collection I use, should I attempt to manually prune the LRU values, or can I trust the VM to bias against reclaiming recently accessed objects?

FYI, I'm developing on Android so I'd prefer not to import any third-party libraries. I'm dealing with a very small heap (16 to 24 MB) so the VM is probably pretty eager to reclaim resources. I assume the GC will be aggressive.

If you use SoftReference-based keys, the VM will bias (strongly) against reclaiming recently accessed objects. However, it would be quite difficult to determine the caching semantics - the only guarantee that a SoftReference gives you (over a WeakReference) is that it will be cleared before an OutOfMemoryError is thrown. It would be perfectly legal for a JVM implementation to treat them identically to WeakReferences, at which point you might end up with a cache that doesn't cache anything.

I don't know how things work on Android, but with Sun's recent JVMs one can tweak the SoftReference behaviour with the -XX:SoftRefLRUPolicyMSPerMB command-line option, which determines the number of milliseconds that a softly-reachable object will be retained for, per MB of free memory in the heap. As you can see, it is going to be exceptionally difficult to get any predictable lifespan behaviour out of this, with the added pain that the setting is global for all soft references in the VM and can't be tweaked separately for individual classes' use of SoftReferences (chances are each use will want different parameters).


The simplest way to make an LRU cache is by extending LinkedHashMap as described here. Since you need thread-safety, the simplest way to extend this initially is to just use Collections.synchronizedMap on an instance of this custom class to ensure safe concurrent behaviour.
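A minimal sketch of that combination might look like this (the factory name, the key/value types and the size limit are just placeholders):

    import java.util.Collections;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // An access-ordered LinkedHashMap that evicts its eldest entry once the cache
    // exceeds MAX_ENTRIES, wrapped in a synchronized view for thread safety.
    class LruCacheFactory {
        private static final int MAX_ENTRIES = 100;   // placeholder size

        static Map<String, byte[]> create() {
            Map<String, byte[]> lru = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };
            return Collections.synchronizedMap(lru);
        }
    }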

Beware premature optimisation - unless you need very high throughput, the theoretically suboptimal overhead of the coarse synchronization is not likely to be an issue. And the good news - if profiling shows that you are performing too slowly due to heavy lock contention, you'll have enough information available about the runtime use of your cache that you'll be able to come up with a suitable lockless alternative (probably based on ConcurrentHashMap with some manual LRU treatment) rather than having to guess at its load profile.
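For illustration only, a very rough sketch of what such a ConcurrentHashMap-based cache with manual (approximate) LRU treatment might look like - the class and field names are made up, and the eviction is deliberately simple and racy, which is usually acceptable for a cache:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Each entry records the tick of its last access; once the cache grows past
    // maxSize, the entry with the oldest tick is evicted (approximately).
    class ApproximateLruCache<K, V> {
        private static class Entry<V> {
            volatile long lastAccess;
            final V value;
            Entry(V value, long tick) { this.value = value; this.lastAccess = tick; }
        }

        private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<K, Entry<V>>();
        private final AtomicLong clock = new AtomicLong();
        private final int maxSize;

        ApproximateLruCache(int maxSize) { this.maxSize = maxSize; }

        V get(K key) {
            Entry<V> e = map.get(key);
            if (e == null) return null;
            e.lastAccess = clock.incrementAndGet();   // record recency
            return e.value;
        }

        void put(K key, V value) {
            map.put(key, new Entry<V>(value, clock.incrementAndGet()));
            if (map.size() > maxSize) {
                evictOldest();
            }
        }

        private void evictOldest() {
            K oldestKey = null;
            long oldestTick = Long.MAX_VALUE;
            for (Map.Entry<K, Entry<V>> e : map.entrySet()) {
                if (e.getValue().lastAccess < oldestTick) {
                    oldestTick = e.getValue().lastAccess;
                    oldestKey = e.getKey();
                }
            }
            if (oldestKey != null) {
                map.remove(oldestKey);
            }
        }
    }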

LinkedHashMap is easy to use for a cache. This creates an MRU cache of size 10:

// An access-ordered LinkedHashMap (the 'true' constructor argument) that
// evicts its eldest entry once it holds more than 10 entries.
private LinkedHashMap<File, ImageIcon> cache = new LinkedHashMap<File, ImageIcon>(10, 0.7f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<File, ImageIcon> eldest) {
        // Returning true tells LinkedHashMap to drop the least recently used entry.
        return size() > 10;
    }
};

I guess you can make a class with synchronized methods that delegate to this LinkedHashMap. Forgive me if my understanding of synchronization is wrong.
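For what it's worth, a sketch of such a delegate (untested, with a made-up class name and the same File/ImageIcon types as above) could look like this:

    import java.io.File;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import javax.swing.ImageIcon;

    // The LinkedHashMap is kept private, and every access goes through a
    // synchronized method, so two threads never touch it at the same time.
    class ImageCache {
        private final LinkedHashMap<File, ImageIcon> cache =
                new LinkedHashMap<File, ImageIcon>(10, 0.7f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<File, ImageIcon> eldest) {
                        return size() > 10;
                    }
                };

        public synchronized ImageIcon get(File key) {
            return cache.get(key);
        }

        public synchronized void put(File key, ImageIcon value) {
            cache.put(key, value);
        }
    }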

www.javolution.org has some interesting features - synchronized fast collections. In your case it's worth a try, as it also offers some nifty enhancements for small devices such as Android ones.

For synchronization, the Collections framework provides a synchronized map:

Map<K, V> myMap = Collections.synchronizedMap(new HashMap<K, V>());

You could then wrap this, or handle the LRU logic in a cache object.

I like the Apache Commons Collections LRUMap.
