
How would you implement an LRU cache in Java?

Please don't say EHCache or OSCache, etc. Assume for the purposes of this question that I want to implement my own using just the SDK (learning by doing). Given that the cache will be used in a multithreaded environment, which data structures would you use? I've already implemented one using LinkedHashMap and Collections#synchronizedMap, but I'm curious whether any of the new concurrent collections would be better candidates.

UPDATE: I was just reading through Yegge's latest post when I found this nugget:

If you need constant-time access and want to maintain the insertion order, you can't do better than a LinkedHashMap, a truly wonderful data structure. The only way it could possibly be more wonderful is if there were a concurrent version. But alas.

I was thinking almost exactly the same thing before I went with the LinkedHashMap + Collections#synchronizedMap implementation I mentioned above. Nice to know I hadn't just overlooked something.

Based on the answers so far, it sounds like my best bet for a highly concurrent LRU would be to extend ConcurrentHashMap using some of the same logic that LinkedHashMap uses.

I like lots of these suggestions, but for now I think I'll stick with LinkedHashMap + Collections.synchronizedMap. If I do revisit this in the future, I'll probably work on extending ConcurrentHashMap in the same way LinkedHashMap extends HashMap.

UPDATE:

By request, here's the gist of my current implementation.

private class LruCache<A, B> extends LinkedHashMap<A, B> {
    private final int maxEntries;

    public LruCache(final int maxEntries) {
        super(maxEntries + 1, 1.0f, true);
        this.maxEntries = maxEntries;
    }

    /**
     * Returns <tt>true</tt> if this <code>LruCache</code> has more entries than the maximum specified when it was
     * created.
     *
     * <p>
     * This method <em>does not</em> modify the underlying <code>Map</code>; it relies on the implementation of
     * <code>LinkedHashMap</code> to do that, but that behavior is documented in the JavaDoc for
     * <code>LinkedHashMap</code>.
     * </p>
     *
     * @param eldest
     *            the <code>Entry</code> in question; this implementation doesn't care what it is, since the
     *            implementation is only dependent on the size of the cache
     * @return <tt>true</tt> if this <code>LruCache</code> has more entries than the maximum specified
     *         when it was created
     * @see java.util.LinkedHashMap#removeEldestEntry(Map.Entry)
     */
    @Override
    protected boolean removeEldestEntry(final Map.Entry<A, B> eldest) {
        return super.size() > maxEntries;
    }
}

Map<String, String> example = Collections.synchronizedMap(new LruCache<String, String>(CACHE_SIZE));

If I were doing this again from scratch today, I'd use Guava's CacheBuilder.
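For reference, a minimal CacheBuilder sketch (assuming Guava is on the classpath; the size is illustrative):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

Cache<String, String> cache = CacheBuilder.newBuilder()
        .maximumSize(100)            // evicts entries in approximately-LRU order
        .build();

cache.put("key", "value");
String value = cache.getIfPresent("key"); // returns null on a miss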

This is round two.

The first round was what I came up with; then I reread the comments with the domain a bit more ingrained in my head.

So here is the simplest version, with a unit test that shows it works, based on some other versions.

First, the non-concurrent version:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruSimpleCache<K, V> implements LruCache<K, V> {

    Map<K, V> map;

    public LruSimpleCache (final int limit) {
           map = new LinkedHashMap<K, V> (16, 0.75f, true) {
               @Override
               protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
                   return super.size() > limit;
               }
           };
    }
    @Override
    public void put ( K key, V value ) {
        map.put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map.get(key);
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        V value =  map.get ( key );
        if (value!=null) {
            map.remove ( key );
            map.put(key, value);
        }
        return value;
    }

    @Override
    public void remove ( K key ) {
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }


}

The true flag to the constructor tracks the access order of gets and puts (see the JavaDocs). Without the true flag, removeEldestEntry would just implement a FIFO cache (see the notes below on FIFO and removeEldestEntry).
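A quick standalone demonstration of that constructor flag (not part of the answer's code):

Map<Integer, Integer> lru  = new LinkedHashMap<> (16, 0.75f, true);  // access-ordered
Map<Integer, Integer> fifo = new LinkedHashMap<> (16, 0.75f, false); // insertion-ordered

lru.put (1, 1);  lru.put (2, 2);  lru.get (1);
fifo.put (1, 1); fifo.put (2, 2); fifo.get (1);

System.out.println (lru);  // {2=2, 1=1} -- get(1) moved 1 to the most-recent end
System.out.println (fifo); // {1=1, 2=2} -- get() does not affect insertion order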

Here is the test that proves it works as an LRU cache:

import org.junit.Test;
import static org.boon.Exceptions.die; // die(...) here is boon's org.boon.Exceptions.die fail-fast helper

public class LruSimpleTest {

    @Test
    public void test () {
        LruCache <Integer, Integer> cache = new LruSimpleCache<> ( 4 );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();


        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();


        if ( !ok ) die ();

    }
}

Now for the concurrent version...

package org.boon.cache;

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LruSimpleConcurrentCache<K, V> implements LruCache<K, V> {

    final CacheMap<K, V>[] cacheRegions;


    private static class CacheMap<K, V> extends LinkedHashMap<K, V> {
        private final ReadWriteLock readWriteLock;
        private final int limit;

        CacheMap ( final int limit, boolean fair ) {
            super ( 16, 0.75f, true );
            this.limit = limit;
            readWriteLock = new ReentrantReadWriteLock ( fair );

        }

        @Override
        protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
            return super.size () > limit;
        }


        @Override
        public V put ( K key, V value ) {
            readWriteLock.writeLock ().lock ();

            V old;
            try {

                old = super.put ( key, value );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return old;

        }


        @Override
        public V get ( Object key ) {
            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.get ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        @Override
        public V remove ( Object key ) {

            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.remove ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public V getSilent ( K key ) {
            readWriteLock.writeLock ().lock ();

            V value;

            try {

                value = this.get ( key );
                if ( value != null ) {
                    this.remove ( key );
                    this.put ( key, value );
                }
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public int size () {
            readWriteLock.readLock ().lock ();
            int size = -1;
            try {
                size = super.size ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return size;
        }

        public String toString () {
            readWriteLock.readLock ().lock ();
            String str;
            try {
                str = super.toString ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return str;
        }


    }

    public LruSimpleConcurrentCache ( final int limit, boolean fair ) {
        int cores = Runtime.getRuntime ().availableProcessors ();
        int stripeSize = cores < 2 ? 4 : cores * 2;
        cacheRegions = new CacheMap[ stripeSize ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    public LruSimpleConcurrentCache ( final int concurrency, final int limit, boolean fair ) {

        cacheRegions = new CacheMap[ concurrency ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    private int stripeIndex ( K key ) {
        int hashCode = key.hashCode () * 31;
        // mask the sign bit so a negative hashCode cannot produce a negative array index
        return ( hashCode & 0x7fffffff ) % ( cacheRegions.length );
    }

    private CacheMap<K, V> map ( K key ) {
        return cacheRegions[ stripeIndex ( key ) ];
    }

    @Override
    public void put ( K key, V value ) {

        map ( key ).put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map ( key ).get ( key );
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        return map ( key ).getSilent ( key );

    }

    @Override
    public void remove ( K key ) {
        map ( key ).remove ( key );
    }

    @Override
    public int size () {
        int size = 0;
        for ( CacheMap<K, V> cache : cacheRegions ) {
            size += cache.size ();
        }
        return size;
    }

    public String toString () {

        StringBuilder builder = new StringBuilder ();
        for ( CacheMap<K, V> cache : cacheRegions ) {
            builder.append ( cache.toString () ).append ( '\n' );
        }

        return builder.toString ();
    }


}

You can see why I cover the non-concurrent version first. The above attempts to create some stripes to reduce lock contention. So it hashes the key and then uses that hash to look up the actual cache. This makes the limit more of a suggestion/rough guess, within a fair amount of error, depending on how well spread your keys' hash algorithm is.

Here is the test to show that the concurrent version probably works. :) (Testing under fire would be the real way.)

import org.junit.Test;
import static org.boon.Boon.puts;       // boon's console-print helper
import static org.boon.Exceptions.die;  // boon's fail-fast helper

public class SimpleConcurrentLRUCache {


    @Test
    public void test () {
        LruCache <Integer, Integer> cache = new LruSimpleConcurrentCache<> ( 1, 4, false );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );

        puts (cache);
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();


        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();

        cache.put ( 8, 8 );
        cache.put ( 9, 9 );

        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();


        puts (cache);


        if ( !ok ) die ();

    }


    @Test
    public void test2 () {
        LruCache <Integer, Integer> cache = new LruSimpleConcurrentCache<> ( 400, false );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        for (int index =0 ; index < 5_000; index++) {
            cache.get(0);
            cache.get ( 1 );
            cache.put ( 2, index  );
            cache.put ( 3, index );
            cache.put(index, index);
        }

        boolean ok = cache.getSilent ( 0 ) == 0 || die ();
        ok |= cache.getSilent ( 1 ) == 1 || die ();
        ok |= cache.getSilent ( 2 ) != null || die ();
        ok |= cache.getSilent ( 3 ) != null || die ();

        ok |= cache.size () < 600 || die();
        if ( !ok ) die ();



    }

}

This is the last post. I deleted the first post, as it was an LFU, not an LRU, cache.

I thought I would give this another go. I was trying to come up with the simplest version of an LRU cache using the standard JDK, without too much implementation.

Here is what I came up with. My first attempt was a bit of a disaster, as I implemented an LFU instead of an LRU, and then I added FIFO and LRU support to it... and then I realized it was becoming a monster. Then I started talking to my buddy John, who was barely interested, and I described at deep length how I implemented an LFU, an LRU and a FIFO and how you could switch between them with a simple ENUM arg, and then I realized that all I really wanted was a simple LRU. So ignore the earlier post from me, and let me know if you want to see an LRU/LFU/FIFO cache that is switchable via an enum... no? OK... here we go.

The simplest possible LRU using just the JDK. I implemented both a concurrent version and a non-concurrent version.

I created a common interface (it is minimalist, so it's likely missing a few features that you would like, but it works for my use cases; if you would like to see feature XYZ, let me know... I live to write code).

public interface LruCache<KEY, VALUE> {
    void put ( KEY key, VALUE value );

    VALUE get ( KEY key );

    VALUE getSilent ( KEY key );

    void remove ( KEY key );

    int size ();
}

You may wonder what getSilent is. I use it for testing. getSilent does not change the LRU score of an item.

First, the non-concurrent one...

import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

public class LruCacheNormal<KEY, VALUE> implements LruCache<KEY,VALUE> {

    Map<KEY, VALUE> map = new HashMap<> ();
    Deque<KEY> queue = new LinkedList<> ();
    final int limit;


    public LruCacheNormal ( int limit ) {
        this.limit = limit;
    }

    public void put ( KEY key, VALUE value ) {
        VALUE oldValue = map.put ( key, value );

        /*If there was already an object under this key,
         then remove it before adding to queue
         Frequently used keys will be at the top so the search could be fast.
         */
        if ( oldValue != null ) {
            queue.removeFirstOccurrence ( key );
        }
        queue.addFirst ( key );

        if ( map.size () > limit ) {
            final KEY removedKey = queue.removeLast ();
            map.remove ( removedKey );
        }

    }


    public VALUE get ( KEY key ) {

        VALUE value = map.get ( key );
        if ( value != null ) {
            /* Touch the queue only on a hit, so a missing key cannot leak into the queue.
               Frequently used keys will be at the top, so the search should be fast. */
            queue.removeFirstOccurrence ( key );
            queue.addFirst ( key );
        }
        return value;
    }


    public VALUE getSilent ( KEY key ) {

        return map.get ( key );
    }

    public void remove ( KEY key ) {

        /* Frequently used keys will be at the top so the search could be fast.*/
        queue.removeFirstOccurrence ( key );
        map.remove ( key );
    }

    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }
}

queue.removeFirstOccurrence is a potentially expensive operation if you have a large cache. One could take LinkedList as an example and add a reverse-lookup hash map from element to node to make remove operations a lot faster and more consistent. I started to, but then realized I don't need it. But... maybe...
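For illustration, a minimal sketch of that reverse-lookup idea (names like IndexedList are mine, not from this answer; it assumes each key appears at most once in the list):

import java.util.HashMap;
import java.util.Map;

// A doubly linked list plus a key -> node index, so remove(key) is O(1)
// instead of the O(n) scan behind Deque.removeFirstOccurrence.
class IndexedList<K> {
    private final class Node {
        final K key;
        Node prev, next;
        Node(K key) { this.key = key; }
    }

    private final Map<K, Node> index = new HashMap<>();
    private Node head, tail;

    /** Adds the key at the front (most recently used end). */
    void addFirst(K key) {
        Node n = new Node(key);
        n.next = head;
        if (head != null) head.prev = n; else tail = n;
        head = n;
        index.put(key, n);
    }

    /** Removes the key in O(1) via the index; no-op if absent. */
    void remove(K key) {
        Node n = index.remove(key);
        if (n == null) return;
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
    }

    /** Removes and returns the last (least recently used) key, or null if empty. */
    K removeLast() {
        if (tail == null) return null;
        K key = tail.key;
        remove(key);
        return key;
    }
}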

When put is called, the key gets added to the queue. When get is called, the key gets removed and re-added to the top of the queue.

If your cache is small and building an item is expensive, then this should be a good cache. If your cache is really large, then the linear search could be a bottleneck, especially if you don't have hot areas of the cache. The more intense the hot spots, the faster the linear search, as hot items are always at the top of it. Anyway... what is needed for this to go faster is another LinkedList whose remove operation has a reverse element-to-node lookup; removing would then be about as fast as removing a key from a hash map.

If you have a cache under 1,000 items, this should work out fine.

Here is a simple test to show its operations in action.

import org.junit.Test;
import static org.boon.Exceptions.die; // boon's fail-fast helper, as in the earlier tests

public class LruCacheTest {

    @Test
    public void test () {
        LruCache<Integer, Integer> cache = new LruCacheNormal<> ( 4 );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 0 ) == 0 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 0 ) == null || die ();
        ok |= cache.getSilent ( 1 ) == null || die ();
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();

        if ( !ok ) die ();

    }
}

The last LRU cache was single-threaded; please don't wrap it in a synchronized anything...

Here is a stab at a concurrent version.

import java.util.Deque;
import java.util.LinkedList;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class ConcurrentLruCache<KEY, VALUE> implements LruCache<KEY,VALUE> {

    private final ReentrantLock lock = new ReentrantLock ();


    private final Map<KEY, VALUE> map = new ConcurrentHashMap<> ();
    private final Deque<KEY> queue = new LinkedList<> ();
    private final int limit;


    public ConcurrentLruCache ( int limit ) {
        this.limit = limit;
    }

    @Override
    public void put ( KEY key, VALUE value ) {
        VALUE oldValue = map.put ( key, value );
        if ( oldValue != null ) {
            removeThenAddKey ( key );
        } else {
            addKey ( key );
        }
        if (map.size () > limit) {
            map.remove ( removeLast() );
        }
    }


    @Override
    public VALUE get ( KEY key ) {
        VALUE value = map.get ( key );
        if ( value != null ) {
            removeThenAddKey ( key ); // touch the queue only on a hit, so missing keys don't leak into it
        }
        return value;
    }


    private void addKey(KEY key) {
        lock.lock ();
        try {
            queue.addFirst ( key );
        } finally {
            lock.unlock ();
        }


    }

    private KEY removeLast( ) {
        lock.lock ();
        try {
            final KEY removedKey = queue.removeLast ();
            return removedKey;
        } finally {
            lock.unlock ();
        }
    }

    private void removeThenAddKey(KEY key) {
        lock.lock ();
        try {
            queue.removeFirstOccurrence ( key );
            queue.addFirst ( key );
        } finally {
            lock.unlock ();
        }

    }

    private void removeFirstOccurrence(KEY key) {
        lock.lock ();
        try {
            queue.removeFirstOccurrence ( key );
        } finally {
            lock.unlock ();
        }

    }


    @Override
    public VALUE getSilent ( KEY key ) {
        return map.get ( key );
    }

    @Override
    public void remove ( KEY key ) {
        removeFirstOccurrence ( key );
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString () {
        return map.toString ();
    }
}

The main differences are the use of ConcurrentHashMap instead of HashMap, and the use of the Lock (I could have gotten away with synchronized, but...).

I have not tested it under fire, but it seems like a simple LRU cache that might work out in 80% of the use cases where you need a simple LRU map.

I welcome feedback, except of the "why don't you use library a, b, or c" variety. The reason I don't always use a library is that I don't always want every war file to be 80MB, and I write libraries, so I tend to make the libs pluggable with a good-enough solution in place, so that someone can plug in another cache provider if they like. :) I never know when someone might need Guava or ehcache or something else, and I don't want to force them in; but if I make caching pluggable, I will not exclude them either.

Reduction of dependencies has its own reward. I'd love to get some feedback on how to make this even simpler, or faster, or both.

Also, if anyone knows of a ready-to-go...

OK... I know what you are thinking... Why doesn't he just use removeEldestEntry from LinkedHashMap? Well, I should, but... but... but... with the default insertion-ordered constructor (as in the snippet below), that would be a FIFO, not an LRU, and we were trying to implement an LRU.

    Map<KEY, VALUE> map = new LinkedHashMap<KEY, VALUE> () {

        @Override
        protected boolean removeEldestEntry ( Map.Entry<KEY, VALUE> eldest ) {
            return this.size () > limit;
        }
    };

This test fails for the above code...

        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();

So here is a quick-and-dirty FIFO cache using removeEldestEntry.

import java.util.*;

public class FifoCache<KEY, VALUE> implements LruCache<KEY,VALUE> {

    final int limit;

    Map<KEY, VALUE> map = new LinkedHashMap<KEY, VALUE> () {

        @Override
        protected boolean removeEldestEntry ( Map.Entry<KEY, VALUE> eldest ) {
            return this.size () > limit;
        }
    };


    public FifoCache ( int limit ) {
        this.limit = limit;
    }

    public void put ( KEY key, VALUE value ) {
         map.put ( key, value );
    }


    public VALUE get ( KEY key ) {

        return map.get ( key );
    }


    public VALUE getSilent ( KEY key ) {

        return map.get ( key );
    }

    public void remove ( KEY key ) {
        map.remove ( key );
    }

    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }
}

FIFOs are fast. No searching around. You could put a FIFO in front of an LRU, and that would handle most hot entries quite nicely. A better LRU is going to need that reverse element-to-node lookup.

Anyway... now that I wrote some code, let me go through the other answers and see what I missed the first time I scanned them.

LinkedHashMap is O(1), but it requires synchronization. No need to reinvent the wheel there.

Two options for increasing concurrency:

1. Create multiple LinkedHashMaps and hash into them: for example, LinkedHashMap[4], index 0, 1, 2, 3. On the key, do key % 4 (or the bitwise key & 3) to pick which map to use for a put/get/remove. (A sketch follows after these notes.)

2. You could do an 'almost' LRU by extending ConcurrentHashMap and having a linked-hash-map-like structure in each of the regions inside it. Locking would occur more granularly than with a synchronized LinkedHashMap. On a put or putIfAbsent, only a lock on the head and tail of the list is needed (per region). On a remove or get, the whole region needs to be locked. I'm curious if atomic linked lists of some sort might help here -- probably so for the head of the list. Maybe for more.

The structure would not keep the total order, but only the order per region. As long as the number of entries is much larger than the number of regions, this is good enough for most caches. Each region will have to have its own entry count, and this, rather than the global count, would be used for the eviction trigger. The default number of regions in a ConcurrentHashMap is 16, which is plenty for most servers today.

  1. would be easier to write and faster under moderate concurrency.

  2. would be more difficult to write, but scale much better at very high concurrency. It would be slower for normal access (just as ConcurrentHashMap is slower than HashMap where there is no concurrency).
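A minimal sketch of option 1, with illustrative names (four synchronized, access-ordered LinkedHashMaps, picked by key hash):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

class StripedLruCache<K, V> {
    private final Map<K, V>[] stripes;

    @SuppressWarnings("unchecked")
    StripedLruCache(final int perStripeLimit) {
        stripes = new Map[4];
        for (int i = 0; i < stripes.length; i++) {
            stripes[i] = Collections.synchronizedMap(
                new LinkedHashMap<K, V>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > perStripeLimit;
                    }
                });
        }
    }

    // key & 3 is the "key % 4" pick from option 1, and it is never negative.
    private Map<K, V> stripe(K key) {
        return stripes[key.hashCode() & 3];
    }

    public void put(K key, V value) { stripe(key).put(key, value); }
    public V get(K key)             { return stripe(key).get(key); }
    public V remove(K key)          { return stripe(key).remove(key); }
}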

There are two open source implementations.

Apache Solr has ConcurrentLRUCache: https://lucene.apache.org/solr/3_6_1/org/apache/solr/util/ConcurrentLRUCache.html

There's an open source project for a ConcurrentLinkedHashMap: http://code.google.com/p/concurrentlinkedhashmap/

I would consider using java.util.concurrent.PriorityBlockingQueue, with priority determined by a "numberOfUses" counter in each element. I would be very, very careful to get all my synchronisation correct, as the "numberOfUses" counter implies that the element can't be immutable.

The element object would be a wrapper for the objects in the cache:

class CacheElement {
    private final Object obj;
    private int numberOfUses = 0;

    CacheElement(Object obj) {
        this.obj = obj;
    }

    ... etc.
}

Hope this helps.

import java.util.*;
public class Lru {

public static <K,V> Map<K,V> lruCache(final int maxSize) {
    return new LinkedHashMap<K, V>(maxSize*4/3, 0.75f, true) {

        private static final long serialVersionUID = -3588047435434569014L;

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxSize;
        }
    };
 }
 public static void main(String[] args) {
    Map<Object, Object> lru = Lru.lruCache(2);
    lru.put("1", "1");
    lru.put("2", "2");
    lru.put("3", "3");
    System.out.println(lru); // prints {2=2, 3=3} -- "1" was evicted as the eldest entry
 }
}

An LRU cache can be implemented using a ConcurrentLinkedQueue and a ConcurrentHashMap, which can be used in multithreading scenarios as well. The head of the queue is the element that has been on the queue the longest time. The tail of the queue is the element that has been on the queue the shortest time. When an element exists in the map, we can remove it from the queue and insert it at the tail.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class LRUCache<K,V> {
  private ConcurrentHashMap<K,V> map;
  private ConcurrentLinkedQueue<K> queue;
  private final int size; 

  public LRUCache(int size) {
    this.size = size;
    map = new ConcurrentHashMap<K,V>(size);
    queue = new ConcurrentLinkedQueue<K>();
  }

  public V get(K key) {
    //Recently accessed, hence move it to the tail
    queue.remove(key);
    queue.add(key);
    return map.get(key);
  }

  public void put(K key, V value) {
    //ConcurrentHashMap doesn't allow null key or values
    if(key == null || value == null) throw new NullPointerException();
    if(map.containsKey(key)) {
      queue.remove(key);
    }
    if(queue.size() >= size) {
      K lruKey = queue.poll();
      if(lruKey != null) {
        map.remove(lruKey);
      }
    }
    queue.add(key);
    map.put(key,value);
  }

}

Here is my implementation for LRU. I have used a PriorityQueue, which basically works as a FIFO and is not thread-safe. The Comparator orders the pages by creation time, oldest first, so the least recently used page is evicted first.

Pages for consideration: 2, 1, 0, 2, 8, 2, 4

Page added into cache is : 2
Page added into cache is : 1
Page added into cache is : 0
Page: 2 already exists in cache. Last accessed time updated
Page Fault, PAGE: 1, Replaced with PAGE: 8
Page added into cache is : 8
Page: 2 already exists in cache. Last accessed time updated
Page Fault, PAGE: 0, Replaced with PAGE: 4
Page added into cache is : 4

OUTPUT

LRUCache Pages
-------------
PageName: 8, PageCreationTime: 1365957019974
PageName: 2, PageCreationTime: 1365957020074
PageName: 4, PageCreationTime: 1365957020174


import java.util.Comparator;
import java.util.Iterator;
import java.util.PriorityQueue;


public class LRUForCache {
    private PriorityQueue<LRUPage> priorityQueue = new PriorityQueue<LRUPage>(3, new LRUPageComparator());
    public static void main(String[] args) throws InterruptedException {

        System.out.println(" Pages for consideration : 2, 1, 0, 2, 8, 2, 4");
        System.out.println("----------------------------------------------\n");

        LRUForCache cache = new LRUForCache();
        cache.addPageToQueue(new LRUPage("2"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("1"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("0"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("2"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("8"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("2"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("4"));
        Thread.sleep(100);

        System.out.println("\nLRUCache Pages");
        System.out.println("-------------");
        cache.displayPriorityQueue();
    }


    public synchronized void  addPageToQueue(LRUPage page){
        boolean pageExists = false;
        if(priorityQueue.size() == 3){
            Iterator<LRUPage> iterator = priorityQueue.iterator();

            while(iterator.hasNext()){
                LRUPage next = iterator.next();
                if(next.getPageName().equals(page.getPageName())){
                    /* I wanted to just change the time, so there would be no need to poll and add again,
                       but the reordering of elements does not happen then; it happens only at the time
                       of adding to the queue.

                       If somebody finds a way around this, please let me know.
                     */
                    //next.setPageCreationTime(page.getPageCreationTime());

                    priorityQueue.remove(next);
                    System.out.println("Page: " + page.getPageName() + " already exisit in cache. Last accessed time updated");
                    pageExists = true;
                    break;
                }
            }
            if(!pageExists){
                // enable this to print the queue elements
                //System.out.println(priorityQueue);
                LRUPage poll = priorityQueue.poll();
                System.out.println("Page Fault, PAGE: " + poll.getPageName()+", Replaced with PAGE: "+page.getPageName());

            }
        }
        if(!pageExists){
            System.out.println("Page added into cache is : " + page.getPageName());
        }
        priorityQueue.add(page);

    }

    public void displayPriorityQueue(){
        Iterator<LRUPage> iterator = priorityQueue.iterator();
        while(iterator.hasNext()){
            LRUPage next = iterator.next();
            System.out.println(next);
        }
    }
}

class LRUPage{
    private String pageName;
    private long pageCreationTime;
    public LRUPage(String pagename){
        this.pageName = pagename;
        this.pageCreationTime = System.currentTimeMillis();
    }

    public String getPageName() {
        return pageName;
    }

    public long getPageCreationTime() {
        return pageCreationTime;
    }

    public void setPageCreationTime(long pageCreationTime) {
        this.pageCreationTime = pageCreationTime;
    }

    @Override
    public boolean equals(Object obj) {
        LRUPage page = (LRUPage)obj; 
        if(pageCreationTime == page.pageCreationTime){
            return true;
        }
        return false;
    }

    @Override
    public int hashCode() {
        return (int) (31 * pageCreationTime);
    }

    @Override
    public String toString() {
        return "PageName: " + pageName +", PageCreationTime: "+pageCreationTime;
    }
}


class LRUPageComparator implements Comparator<LRUPage>{

    @Override
    public int compare(LRUPage o1, LRUPage o2) {
        if(o1.getPageCreationTime() > o2.getPageCreationTime()){
            return 1;
        }
        if(o1.getPageCreationTime() < o2.getPageCreationTime()){
            return -1;
        }
        return 0;
    }
}

This is the LRU cache I use, which encapsulates a LinkedHashMap and handles concurrency with a simple synchronized lock guarding the juicy spots. It "touches" elements as they are used so that they become the "freshest" element again, so it is actually LRU. I also had the requirement that my elements have a minimum lifespan, which you can also think of as the "maximum idle time" permitted before the element is up for eviction.

However, I agree with Hank's conclusion and the accepted answer -- if I were starting this again today, I'd check out Guava's CacheBuilder.

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;


public class MaxIdleLRUCache<KK, VV> {

    final static private int IDEAL_MAX_CACHE_ENTRIES = 128;

    public interface DeadElementCallback<KK, VV> {
        public void notify(KK key, VV element);
    }

    private Object lock = new Object();
    private long minAge;
    private HashMap<KK, Item<VV>> cache;


    public MaxIdleLRUCache(long minAgeMilliseconds) {
        this(minAgeMilliseconds, IDEAL_MAX_CACHE_ENTRIES);
    }

    public MaxIdleLRUCache(long minAgeMilliseconds, int idealMaxCacheEntries) {
        this(minAgeMilliseconds, idealMaxCacheEntries, null);
    }

    public MaxIdleLRUCache(long minAgeMilliseconds, int idealMaxCacheEntries, final DeadElementCallback<KK, VV> callback) {
        this.minAge = minAgeMilliseconds;
        this.cache = new LinkedHashMap<KK, Item<VV>>(IDEAL_MAX_CACHE_ENTRIES + 1, .75F, true) {
            private static final long serialVersionUID = 1L;

            // This method is called just after a new entry has been added
            public boolean removeEldestEntry(Map.Entry<KK, Item<VV>> eldest) {
                // let's see if the oldest entry is old enough to be deleted. We don't actually care about the cache size.
                long age = System.currentTimeMillis() - eldest.getValue().birth;
                if (age > MaxIdleLRUCache.this.minAge) {
                    if ( callback != null ) {
                        callback.notify(eldest.getKey(), eldest.getValue().payload);
                    }
                    return true; // remove it
                }
                return false; // don't remove this element
            }
        };

    }

    public void put(KK key, VV value) {
        synchronized ( lock ) {
//          System.out.println("put->"+key+","+value);
            cache.put(key, new Item<VV>(value));
        }
    }

    public VV get(KK key) {
        synchronized ( lock ) {
//          System.out.println("get->"+key);
            Item<VV> item = getItem(key);
            return item == null ? null : item.payload;
        }
    }

    public VV remove(KK key) {
        synchronized ( lock ) {
//          System.out.println("remove->"+key);
            Item<VV> item =  cache.remove(key);
            if ( item != null ) {
                return item.payload;
            } else {
                return null;
            }
        }
    }

    public int size() {
        synchronized ( lock ) {
            return cache.size();
        }
    }

    private Item<VV> getItem(KK key) {
        Item<VV> item = cache.get(key);
        if (item == null) {
            return null;
        }
        item.touch(); // touch the item to reset its idle timeout
        return item;
    }

    private static class Item<T> {
        long birth;
        T payload;

        Item(T payload) {
            this.birth = System.currentTimeMillis();
            this.payload = payload;
        }

        public void touch() {
            this.birth = System.currentTimeMillis();
        }
    }

}

Well, for a cache you will generally be looking up some piece of data via a proxy object (a URL, a String...), so interface-wise you are going to want a map. But to kick things out, you want a queue-like structure. Internally I would maintain two data structures, a priority queue and a hash map. Here's an implementation that should be able to do everything in O(1) time.

Here's a class I whipped up pretty quick:

import java.util.HashMap;
import java.util.Map;
public class LRUCache<K, V>
{
    int maxSize;
    int currentSize = 0;

    Map<K, ValueHolder<K, V>> map;
    LinkedList<K> queue;

    public LRUCache(int maxSize)
    {
        this.maxSize = maxSize;
        map = new HashMap<K, ValueHolder<K, V>>();
        queue = new LinkedList<K>();
    }

    private void freeSpace()
    {
        K k = queue.remove();
        map.remove(k);
        currentSize--;
    }

    public void put(K key, V val)
    {
        while(currentSize >= maxSize)
        {
            freeSpace();
        }
        if(map.containsKey(key))
        {//heat up that item and refresh its value
            get(key);
            map.get(key).value = val;
            return;
        }
        ListNode<K> ln = queue.add(key);
        ValueHolder<K, V> rv = new ValueHolder<K, V>(val, ln);
        map.put(key, rv);       
        currentSize++;
    }

    public V get(K key)
    {
        ValueHolder<K, V> rv = map.get(key);
        if(rv == null) return null;
        queue.remove(rv.queueLocation);
        rv.queueLocation = queue.add(key);//this ensures that each item has only one copy of the key in the queue
        return rv.value;
    }
}

class ListNode<K>
{
    ListNode<K> prev;
    ListNode<K> next;
    K value;
    public ListNode(K v)
    {
        value = v;
        prev = null;
        next = null;
    }
}

class ValueHolder<K,V>
{
    V value;
    ListNode<K> queueLocation;
    public ValueHolder(V value, ListNode<K> ql)
    {
        this.value = value;
        this.queueLocation = ql;
    }
}

class LinkedList<K>
{
    ListNode<K> head = null;
    ListNode<K> tail = null;

    public ListNode<K> add(K v)
    {
        if(head == null)
        {
            assert(tail == null);
            head = tail = new ListNode<K>(v);
        }
        else
        {
            tail.next = new ListNode<K>(v);
            tail.next.prev = tail;
            tail = tail.next;
            if(tail.prev == null)
            {
                tail.prev = head;
                head.next = tail;
            }
        }
        return tail;
    }

    public K remove()
    {
        if(head == null)
            return null;
        K val = head.value;
        if(head.next == null)
        {
            head = null;
            tail = null;
        }
        else
        {
            head = head.next;
            head.prev = null;
        }
        return val;
    }

    public void remove(ListNode<K> ln)
    {
        ListNode<K> prev = ln.prev;
        ListNode<K> next = ln.next;
        if(prev == null)
        {
            head = next;
        }
        else
        {
            prev.next = next;
        }
        if(next == null)
        {
            tail = prev;
        }
        else
        {
            next.prev = prev;
        }       
    }
}

Here's how it works. Keys are stored in a linked list with the oldest keys at the front of the list (new keys go to the back), so when you need to 'eject' something, you just pop it off the front of the queue and then use the key to remove the value from the map. When an item gets referenced, you grab the ValueHolder from the map and then use the queueLocation variable to remove the key from its current location in the queue and put it at the back (it is now the most recently used). Adding things is pretty much the same.

I'm sure there are a ton of errors here, and I haven't implemented any synchronization. But this class will provide O(1) adds to the cache, O(1) removal of old items, and O(1) retrieval of cache items. Even a trivial synchronization (just synchronize every public method) would still have little lock contention due to the short run time. If anyone has any clever synchronization tricks, I would be very interested. Also, I'm sure there are some additional optimizations that you could implement using the maxSize variable with respect to the map.
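For illustration, the trivial synchronization described above could look like this (a sketch wrapping the class above; the wrapper name is mine):

// "Trivial synchronization": every public method takes the same monitor.
class SynchronizedLRUCache<K, V> {
    private final LRUCache<K, V> delegate;

    SynchronizedLRUCache(int maxSize) { this.delegate = new LRUCache<K, V>(maxSize); }

    public synchronized void put(K key, V val) { delegate.put(key, val); }
    public synchronized V get(K key)           { return delegate.get(key); }
}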

Here is my tested, best-performing concurrent LRU cache implementation, without any synchronized block:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ConcurrentLRUCache<Key, Value> {

    private final int maxSize;

    private ConcurrentHashMap<Key, Value> map;
    private ConcurrentLinkedQueue<Key> queue;

    public ConcurrentLRUCache(final int maxSize) {
        this.maxSize = maxSize;
        map = new ConcurrentHashMap<Key, Value>(maxSize);
        queue = new ConcurrentLinkedQueue<Key>();
    }

    /**
     * @param key - may not be null!
     * @param value - may not be null!
     */
    public void put(final Key key, final Value value) {
        if (map.containsKey(key)) {
            queue.remove(key); // remove the key from the FIFO queue
        }

        while (queue.size() >= maxSize) {
            Key oldestKey = queue.poll();
            if (null != oldestKey) {
                map.remove(oldestKey);
            }
        }
        queue.add(key);
        map.put(key, value);
    }

    /**
     * @param key - may not be null!
     * @return the value associated to the given key or null
     */
    public Value get(final Key key) {
        return map.get(key);
    }
}

Have a look at ConcurrentSkipListMap. It should give you log(n) time for testing and removing an element if it is already contained in the cache, and constant time for re-adding it.

You'd just need some counter etc. and a wrapper element to force ordering of the LRU order and ensure the least recently used stuff is discarded when the cache is full.
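A rough sketch of that idea (illustrative names; an AtomicLong stamps each access, and the skip list orders keys by stamp so the eldest is at the head; this is not race-free, just the shape of the approach):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

class SkipListLruSketch<K, V> {
    private final ConcurrentSkipListMap<Long, K> byAge = new ConcurrentSkipListMap<>(); // stamp -> key, oldest first
    private final ConcurrentHashMap<K, Long> stampOf = new ConcurrentHashMap<>();       // key -> its current stamp
    private final ConcurrentHashMap<K, V> values = new ConcurrentHashMap<>();
    private final AtomicLong clock = new AtomicLong();
    private final int limit;

    SkipListLruSketch(int limit) { this.limit = limit; }

    // Re-stamp the key: remove its old position (log n) and append it at the "young" end.
    private void touch(K key) {
        long stamp = clock.incrementAndGet();
        Long old = stampOf.put(key, stamp);
        if (old != null) byAge.remove(old);
        byAge.put(stamp, key);
    }

    V get(K key) {
        V v = values.get(key);
        if (v != null) touch(key);
        return v;
    }

    void put(K key, V value) {
        values.put(key, value);
        touch(key);
        while (values.size() > limit) {
            Map.Entry<Long, K> eldest = byAge.pollFirstEntry(); // least recently used stamp
            if (eldest == null) break;
            // Evict only if this stamp is still the key's live stamp (it wasn't touched since).
            if (stampOf.remove(eldest.getValue(), eldest.getKey())) {
                values.remove(eldest.getValue());
            }
        }
    }
}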

Here is my short implementation, please criticize or improve it!

package util.collection;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Limited size concurrent cache map implementation.<br/>
 * LRU: Least Recently Used.<br/>
 * If you add a new key-value pair to this cache after the maximum size has been exceeded,
 * the oldest key-value pair will be removed before adding.
 */

public class ConcurrentLRUCache<Key, Value> {

private final int maxSize;
private int currentSize = 0;

private ConcurrentHashMap<Key, Value> map;
private ConcurrentLinkedQueue<Key> queue;

public ConcurrentLRUCache(final int maxSize) {
    this.maxSize = maxSize;
    map = new ConcurrentHashMap<Key, Value>(maxSize);
    queue = new ConcurrentLinkedQueue<Key>();
}

private synchronized void freeSpace() {
    Key key = queue.poll();
    if (null != key) {
        map.remove(key);
        currentSize = map.size();
    }
}

public void put(Key key, Value val) {
    if (map.containsKey(key)) {// just heat up that item: refresh the value directly (recursing into put() here would loop forever)
        map.put(key, val);
        return;
    }
    while (currentSize >= maxSize) {
        freeSpace();
    }
    synchronized(this) {
        queue.add(key);
        map.put(key, val);
        currentSize++;
    }
}

public Value get(Key key) {
    return map.get(key);
}
}

The best way to achieve this is to use a LinkedHashMap, which maintains the insertion order of elements. Following is sample code:

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class Solution {

    Map<Integer, Integer> cache;
    int capacity;

    public Solution(int capacity) {
        this.cache = new LinkedHashMap<Integer, Integer>(capacity);
        this.capacity = capacity;
    }

    // This function returns -1 if the key is not
    // present in the cache. Else it moves the key to
    // the front by first removing and then re-adding
    // it, and returns the value.

    public int get(int key) {
        if (!cache.containsKey(key))
            return -1;
        int value = cache.get(key);
        cache.remove(key);
        cache.put(key, value);
        return value;
    }

    public void set(int key, int value) {
        // If the key is already present, remove it first;
        // we are going to re-add it below.
        if (cache.containsKey(key)) {
            cache.remove(key);
        }
        // If the cache is full, remove the least recently used entry.
        else if (cache.size() == capacity) {
            Iterator<Integer> iterator = cache.keySet().iterator();
            cache.remove(iterator.next());
        }
        cache.put(key, value);
    }
}

Here's my own implementation for this problem.

simplelrucache provides thread-safe, very simple, non-distributed LRU caching with TTL support. It provides two implementations:

  • Concurrent, based on ConcurrentLinkedHashMap
  • Synchronized, based on LinkedHashMap

You can find it here: http://code.google.com/p/simplelrucache/

I wanted to add a comment to the answer given by Hank, but somehow I am not able to, so please treat this as a comment.

LinkedHashMap maintains access order as well, based on a parameter passed in its constructor. It keeps a doubly linked list to maintain the order (see LinkedHashMap.Entry).

@Pacerier, it is correct that LinkedHashMap keeps the same order during iteration if an element is added again, but that is only the case in insertion-order mode.
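A tiny demonstration of that insertion-order behavior:

import java.util.LinkedHashMap;
import java.util.Map;

Map<Integer, String> m = new LinkedHashMap<>(); // accessOrder defaults to false (insertion order)
m.put(1, "a");
m.put(2, "b");
m.put(1, "c"); // re-inserting an existing key does NOT move it
System.out.println(m); // {1=c, 2=b}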

This is what I found in the Java docs of the LinkedHashMap.Entry object:

    /**
     * This method is invoked by the superclass whenever the value
     * of a pre-existing entry is read by Map.get or modified by Map.set.
     * If the enclosing Map is access-ordered, it moves the entry
     * to the end of the list; otherwise, it does nothing.
     */
    void recordAccess(HashMap<K,V> m) {
        LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>)m;
        if (lm.accessOrder) {
            lm.modCount++;
            remove();
            addBefore(lm.header);
        }
    }

This method takes care of moving a recently accessed element to the end of the list. So, all in all, LinkedHashMap is the best data structure for implementing an LRU cache.

Another thought, and even a simple implementation, using Java's LinkedHashMap collection.

LinkedHashMap provides the method removeEldestEntry, which can be overridden in the way shown in the example. By default this method returns false. If it returns true and the size of the structure goes beyond the initial capacity, the eldest elements will be removed.

We can have a page number and page content; in my case, the page number is an integer, and for the page content I have kept the page-number value as a string.

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * @author Deepak Singhvi
 */
public class LRUCacheUsingLinkedHashMap {

    private static int CACHE_SIZE = 3;

    public static void main(String[] args) {
        System.out.println(" Pages for consideration : 2, 1, 0, 2, 8, 2, 4,99");
        System.out.println("----------------------------------------------\n");

        // accessOrder is true, so whenever any page gets changed or accessed,
        // its order will change in the map
        LinkedHashMap<Integer,String> lruCache = new LinkedHashMap<Integer,String>(CACHE_SIZE, .75F, true) {
            private static final long serialVersionUID = 1L;

            protected boolean removeEldestEntry(Map.Entry<Integer,String> eldest) {
                return size() > CACHE_SIZE;
            }
        };

        lruCache.put(2, "2");
        lruCache.put(1, "1");
        lruCache.put(0, "0");
        System.out.println(lruCache + " , After first 3 pages in cache");

        lruCache.put(2, "2");
        System.out.println(lruCache + " , Page 2 became the latest page in the cache");

        lruCache.put(8, "8");
        System.out.println(lruCache + " , Adding page 8, which removes eldest element 2 ");

        lruCache.put(2, "2");
        System.out.println(lruCache + " , Page 2 became the latest page in the cache");

        lruCache.put(4, "4");
        System.out.println(lruCache + " , Adding page 4, which removes eldest element 1 ");

        lruCache.put(99, "99");
        System.out.println(lruCache + " , Adding page 99, which removes eldest element 8 ");
    }
}

The result of the above code execution is as follows:

 Pages for consideration : 2, 1, 0, 2, 8, 2, 4,99
--------------------------------------------------
    {2=2, 1=1, 0=0}  , After first 3 pages in cache
    {2=2, 1=1, 0=0}  , Page 2 became the latest page in the cache
    {1=1, 0=0, 8=8}  , Adding page 8, which removes eldest element 2 
    {0=0, 8=8, 2=2}  , Page 2 became the latest page in the cache
    {8=8, 2=2, 4=4}  , Adding page 4, which removes eldest element 1 
    {2=2, 4=4, 99=99} , Adding page 99, which removes eldest element 8 

I'm looking for a better LRU cache using Java code. Is it possible for you to share your Java LRU cache code using LinkedHashMap and Collections#synchronizedMap? Currently I'm using LRUMap implements Map, and the code works fine, but I'm getting an ArrayIndexOutOfBoundsException on load testing with 500 users on the method below. The method moves the recent object to the front of the queue.

private void moveToFront(int index) {
        if (listHead != index) {
            int thisNext = nextElement[index];
            int thisPrev = prevElement[index];
            nextElement[thisPrev] = thisNext;
            if (thisNext >= 0) {
                prevElement[thisNext] = thisPrev;
            } else {
                listTail = thisPrev;
            }
            // Example: if the new head is 1 and the old head was 0, then the new head has no
            // predecessor (prev = -1), its next is the old head, and the old head's prev points back at it.
            prevElement[index] = -1;
            nextElement[index] = listHead;
            prevElement[listHead] = index;
            listHead = index;
        }
    }

The get(Object key) and put(Object key, Object value) methods call the moveToFront method above.

Following the @sanjanab concept (but after fixes), I made my version of the LRUCache, also providing a Consumer that allows doing something with the removed items if needed.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

public class LRUCache<K, V> {

    private ConcurrentHashMap<K, V> map;
    private final Consumer<V> onRemove;
    private ConcurrentLinkedQueue<K> queue;
    private final int size;

    public LRUCache(int size, Consumer<V> onRemove) {
        this.size = size;
        this.onRemove = onRemove;
        this.map = new ConcurrentHashMap<>(size);
        this.queue = new ConcurrentLinkedQueue<>();
    }

    public V get(K key) {
        //Recently accessed, hence move it to the tail
        if (queue.remove(key)) {
            queue.add(key);
            return map.get(key);
        }
        return null;
    }

    public void put(K key, V value) {
        //ConcurrentHashMap doesn't allow null key or values
        if (key == null || value == null) throw new IllegalArgumentException("key and value cannot be null!");

        V existing = map.get(key);
        if (existing != null) {
            queue.remove(key);
            onRemove.accept(existing);
        }

        if (map.size() >= size) {
            K lruKey = queue.poll();
            if (lruKey != null) {
                V removed = map.remove(lruKey);
                onRemove.accept(removed);
            }
        }
        queue.add(key);
        map.put(key, value);
    }
}

Android offers an implementation of an LRU cache. The code is clean and straightforward.
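A minimal usage sketch of android.util.LruCache (Android only; the byte-sizing override is optional):

import android.util.LruCache;

// Cache up to 4 MiB of byte[] values, sized by sizeOf() rather than by entry count.
LruCache<String, byte[]> cache = new LruCache<String, byte[]>(4 * 1024 * 1024) {
    @Override
    protected int sizeOf(String key, byte[] value) {
        return value.length;
    }
};

cache.put("k", new byte[1024]);
byte[] v = cache.get("k"); // null on a miss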

In an LRU cache, the least recently used element is removed once the cache size reaches its maximum.

The LRU cache evicts the least recently used entry, and the LRU cache size is fixed.

It supports the get(key) and put(key, value) methods. When the cache is full, put() first removes the least recently used entry and then adds the new one.

The complete code can be seen here.
