
Best way to implement LRU cache

I was looking at the problem of LRU cache implementation: once the cache is full, the least recently used item is evicted and replaced by the new item.

I have two implementations in mind:

1) Create two maps that look something like this:

std::map<Timestamp, K> time_to_key
std::map<K, std::pair<Timestamp, V>> LRUCache

To insert a new element, we can put the current timestamp and value in the LRUCache. When the cache is full, we can evict the least recent element by finding the smallest timestamp present in time_to_key and removing the corresponding key from LRUCache. Inserting a new item is O(1); updating the timestamp is O(n) (because we need to search for the k corresponding to the timestamp in time_to_key).
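As a rough Java sketch of approach (1) — the class and method names here are my own, and a logical clock stands in for wall-clock timestamps to avoid duplicate keys:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the two-map scheme: a TreeMap ordered by a
// logical timestamp plus hash maps from key to timestamp and value.
public class TwoMapLru<K, V> {
    private final int capacity;
    private long clock = 0; // logical timestamp
    private final TreeMap<Long, K> timeToKey = new TreeMap<>();
    private final Map<K, Long> keyToTime = new HashMap<>();
    private final Map<K, V> values = new HashMap<>();

    public TwoMapLru(int capacity) { this.capacity = capacity; }

    public void put(K key, V value) {
        touch(key);
        values.put(key, value);
        if (values.size() > capacity) {
            // Evict the entry with the smallest timestamp.
            K eldest = timeToKey.pollFirstEntry().getValue();
            keyToTime.remove(eldest);
            values.remove(eldest);
        }
    }

    public V get(K key) {
        V v = values.get(key);
        if (v != null) touch(key);
        return v;
    }

    // With the old timestamp stored alongside the key, removing it from
    // the TreeMap is O(log n), not the O(n) scan the question worries about.
    private void touch(K key) {
        Long old = keyToTime.get(key);
        if (old != null) timeToKey.remove(old);
        long now = clock++;
        keyToTime.put(key, now);
        timeToKey.put(now, key);
    }

    public boolean contains(K key) { return values.containsKey(key); }

    public static void main(String[] args) {
        TwoMapLru<String, Integer> cache = new TwoMapLru<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" so "b" becomes least recent
        cache.put("c", 3); // evicts "b"
        System.out.println(cache.contains("b")); // prints false
    }
}
```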

2) Keep a linked list in which the least recently used item is at the head and new items are added at the tail. When an item arrives that is already present in the cache, the node corresponding to that item's key is moved to the tail of the list. Inserting a new item is O(1), updating the timestamp is again O(n) (because we need to find the node and move it to the tail of the list), and deleting an element is O(1).

Now I have the following questions:

  1. Which of these implementations is better for an LRU cache?

  2. Is there any other way to implement an LRU cache?

  3. In Java, should I use a HashMap to implement the LRU cache?

  4. I have seen questions asking for a generic LRU cache and also questions asking for an LRU cache. Is a generic LRU cache different from an LRU cache?

Thanks in advance!

EDIT:

Another way (the easiest way) to implement an LRU cache in Java is by using LinkedHashMap and overriding the boolean removeEldestEntry(Map.Entry eldest) method.

If you want an LRU cache, the simplest option in Java is LinkedHashMap. The default behaviour is FIFO, but you can change it to "access order", which makes it an LRU cache.

public static <K,V> Map<K,V> lruCache(final int maxSize) {
    return new LinkedHashMap<K, V>(maxSize*4/3, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxSize;
        }
    };
}

Note: I am using the constructor that changes the map from insertion order to access order, so that the eldest entry is the least recently used one.

From the Javadoc:

public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder)
Constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.
Parameters:
initialCapacity - the initial capacity
loadFactor - the load factor
accessOrder - the ordering mode - true for access-order, false for insertion-order

When accessOrder is true, the LinkedHashMap re-arranges the order of the map whenever you get() an entry that is not the last one.

This way, the eldest entry is the least recently used one.
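The effect of the accessOrder flag can be seen in a tiny demo (hypothetical class name, not from the answer above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// With accessOrder = true, a get() moves the entry to the end of the
// iteration order, so iteration runs from eldest to most recently used.
public class AccessOrderDemo {
    public static String demo() {
        Map<Integer, String> m = new LinkedHashMap<>(16, 0.75f, true);
        m.put(1, "a");
        m.put(2, "b");
        m.put(3, "c");
        m.get(1); // touches key 1, moving it to the end
        return m.keySet().toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints [2, 3, 1]
    }
}
```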

Considering that the cache allows concurrent access, the problem of designing a good LRU cache boils down to:

a) Can we avoid using a mutex while updating the two structures (the cache structure and the LRU structure)?

b) Does the read (get) operation of the cache require a mutex?

In more detail: say we implement this using java.util.concurrent.ConcurrentHashMap (cache structure) and java.util.concurrent.ConcurrentLinkedQueue (LRU structure).

a) Locking both structures on edit operations: addEntry(), removeEntry(), evictEntries(), etc.

b) The above might be acceptable for slow write operations, but the problem is that even for the read (get) operation we need to lock both structures, because a get means moving the entry to the front of the queue for the LRU strategy (assuming entries are removed from the end of the queue).

Using efficient concurrent structures like ConcurrentHashMap and the wait-free ConcurrentLinkedQueue and then putting a lock over them defeats their whole purpose.

I implemented an LRU cache with the same approach; however, the LRU structure was updated asynchronously, eliminating the need for any mutex while accessing these structures. LRU is an internal detail of the cache and can be implemented in any manner without affecting the users of the cache.
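A minimal sketch of that "record the access now, reorder later" idea — entirely my own illustration with invented names, not the answerer's actual code; a real implementation might drain the log on a background thread instead of on put():

```java
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Reads and writes touch only the ConcurrentHashMap plus a lock-free
// access log; a single thread at a time drains the log and updates the
// recency order, so no reader ever blocks on a mutex.
public class AsyncLruSketch<K, V> {
    private final int capacity;
    private final ConcurrentHashMap<K, V> data = new ConcurrentHashMap<>();
    private final ConcurrentLinkedQueue<K> accessLog = new ConcurrentLinkedQueue<>();
    private final LinkedHashSet<K> lruOrder = new LinkedHashSet<>(); // only touched under drainLock
    private final AtomicBoolean drainLock = new AtomicBoolean();

    public AsyncLruSketch(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        V v = data.get(key);
        if (v != null) accessLog.add(key); // O(1), no mutex
        return v;
    }

    public void put(K key, V value) {
        data.put(key, value);
        accessLog.add(key);
        drainIfPossible();
    }

    // Best effort: only one thread drains at a time; the others skip.
    private void drainIfPossible() {
        if (!drainLock.compareAndSet(false, true)) return;
        try {
            K k;
            while ((k = accessLog.poll()) != null) {
                lruOrder.remove(k); // re-adding moves it to most recently used
                lruOrder.add(k);
            }
            while (lruOrder.size() > capacity) {
                Iterator<K> it = lruOrder.iterator();
                K eldest = it.next();
                it.remove();
                data.remove(eldest);
            }
        } finally {
            drainLock.set(false);
        }
    }

    public boolean contains(K key) { return data.containsKey(key); }

    public static void main(String[] args) {
        AsyncLruSketch<String, Integer> cache = new AsyncLruSketch<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // logged; applied on the next drain
        cache.put("c", 3); // drain moves "a" up, then evicts "b"
        System.out.println(cache.contains("b")); // prints false
    }
}
```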

Later, I also read about ConcurrentLinkedHashMap:

https://code.google.com/p/concurrentlinkedhashmap/

I found that it is also trying to do the same thing. I have not used this structure, but it might be a good fit.

I'd like to expand on some of these suggestions with two possible implementations: one not thread-safe, and one that might be.

Here is the simplest version, with a unit test that shows it works.

First, the non-concurrent version:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruSimpleCache<K, V> implements LruCache<K, V> {

    private final Map<K, V> map;

    public LruSimpleCache(final int limit) {
        map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
                return super.size() > limit;
            }
        };
    }

    @Override
    public void put(K key, V value) {
        map.put(key, value);
    }

    @Override
    public V get(K key) {
        return map.get(key);
    }

    // For testing only
    @Override
    public V getSilent(K key) {
        V value = map.get(key);
        if (value != null) {
            map.remove(key);
            map.put(key, value);
        }
        return value;
    }

    @Override
    public void remove(K key) {
        map.remove(key);
    }

    @Override
    public int size() {
        return map.size();
    }

    public String toString() {
        return map.toString();
    }
}

The true flag will track the accesses of gets and puts. See the JavaDocs. removeEldestEntry without the true flag to the constructor would just implement a FIFO cache (see notes below on FIFO and removeEldestEntry).

Here is the test that proves it works as an LRU cache:

public class LruSimpleTest {

    @Test
    public void test() {
        LruCache<Integer, Integer> cache = new LruSimpleCache<>(4);

        cache.put(0, 0);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.put(3, 3);

        boolean ok = cache.size() == 4 || die("size" + cache.size());

        cache.put(4, 4);
        cache.put(5, 5);
        ok |= cache.size() == 4 || die("size" + cache.size());
        ok |= cache.getSilent(2) == 2 || die();
        ok |= cache.getSilent(3) == 3 || die();
        ok |= cache.getSilent(4) == 4 || die();
        ok |= cache.getSilent(5) == 5 || die();

        cache.get(2);
        cache.get(3);
        cache.put(6, 6);
        cache.put(7, 7);
        ok |= cache.size() == 4 || die("size" + cache.size());
        ok |= cache.getSilent(2) == 2 || die();
        ok |= cache.getSilent(3) == 3 || die();
        ok |= cache.getSilent(4) == null || die();
        ok |= cache.getSilent(5) == null || die();

        if (!ok) die();
    }
}

Now for the concurrent version...

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LruSimpleConcurrentCache<K, V> implements LruCache<K, V> {

    final CacheMap<K, V>[] cacheRegions;


    private static class CacheMap<K, V> extends LinkedHashMap<K, V> {
        private final ReadWriteLock readWriteLock;
        private final int limit;

        CacheMap ( final int limit, boolean fair ) {
            super ( 16, 0.75f, true );
            this.limit = limit;
            readWriteLock = new ReentrantReadWriteLock ( fair );

        }

        protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
            return super.size () > limit;
        }


        @Override
        public V put ( K key, V value ) {
            readWriteLock.writeLock ().lock ();

            V old;
            try {

                old = super.put ( key, value );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return old;

        }


        @Override
        public V get ( Object key ) {
            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.get ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        @Override
        public V remove ( Object key ) {

            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.remove ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public V getSilent ( K key ) {
            readWriteLock.writeLock ().lock ();

            V value;

            try {

                value = this.get ( key );
                if ( value != null ) {
                    this.remove ( key );
                    this.put ( key, value );
                }
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public int size () {
            readWriteLock.readLock ().lock ();
            int size = -1;
            try {
                size = super.size ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return size;
        }

        public String toString () {
            readWriteLock.readLock ().lock ();
            String str;
            try {
                str = super.toString ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return str;
        }


    }

    public LruSimpleConcurrentCache ( final int limit, boolean fair ) {
        int cores = Runtime.getRuntime ().availableProcessors ();
        int stripeSize = cores < 2 ? 4 : cores * 2;
        cacheRegions = new CacheMap[ stripeSize ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    public LruSimpleConcurrentCache ( final int concurrency, final int limit, boolean fair ) {

        cacheRegions = new CacheMap[ concurrency ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    private int stripeIndex ( K key ) {
        int hashCode = key.hashCode () * 31;
        // floorMod keeps the index non-negative even for negative hash codes
        return Math.floorMod ( hashCode, cacheRegions.length );
    }

    private CacheMap<K, V> map ( K key ) {
        return cacheRegions[ stripeIndex ( key ) ];
    }

    @Override
    public void put ( K key, V value ) {

        map ( key ).put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map ( key ).get ( key );
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        return map ( key ).getSilent ( key );

    }

    @Override
    public void remove ( K key ) {
        map ( key ).remove ( key );
    }

    @Override
    public int size () {
        int size = 0;
        for ( CacheMap<K, V> cache : cacheRegions ) {
            size += cache.size ();
        }
        return size;
    }

    public String toString () {

        StringBuilder builder = new StringBuilder ();
        for ( CacheMap<K, V> cache : cacheRegions ) {
            builder.append ( cache.toString () ).append ( '\n' );
        }

        return builder.toString ();
    }


}

You can see why I cover the non-concurrent version first. The above attempts to create some stripes to reduce lock contention. So it hashes the key and then looks up that hash to find the actual cache. This makes the limit size more of a suggestion/rough guess, within a fair amount of error, depending on how well spread your keys' hash algorithm is.
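One caveat when deriving the stripe from hashCode(): Java hash codes can be negative, so a plain % can yield a negative array index. A small demo (the helper names here are hypothetical):

```java
// Shows why a plain % on a hash code can produce a negative stripe index,
// and how Math.floorMod keeps the index in the range [0, stripes).
public class StripeIndexDemo {
    static int naive(Object key, int stripes) {
        return (key.hashCode() * 31) % stripes; // can be negative
    }

    static int safe(Object key, int stripes) {
        return Math.floorMod(key.hashCode() * 31, stripes); // always in range
    }

    public static void main(String[] args) {
        Integer key = -7; // Integer.hashCode(-7) == -7
        System.out.println(naive(key, 8)); // prints -1
        System.out.println(safe(key, 8));  // prints 7
    }
}
```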

Normally, LRU caches are represented as a single queue of elements ordered by recency of use. If the one provided by your standard library doesn't allow you to remove objects from the middle (for example, to put them on the top), then you may have to roll your own.

Problem statement:

Create an LRU cache that stores Employee objects (max 5 objects), and find out who logged in first, who logged in after, and so on.

package com.test.example.dto;

import java.sql.Timestamp;
/**
 * 
 * @author Vaquar.khan@gmail.com
 *
 */
public class Employee implements Comparable<Employee> {
    private int     id;
    private String  name;
    private int     age;
    private Timestamp loginTime ;

public int getId() {
    return id;
}

public void setId(int id) {
    this.id = id;
}

public String getName() {
    return name;
}

public void setName(String name) {
    this.name = name;
}

public int getAge() {
    return age;
}

public void setAge(int age) {
    this.age = age;
}

public Timestamp getLoginTime() {
    return loginTime;
}

public void setLoginTime(Timestamp loginTime) {
    this.loginTime = loginTime;
}

@Override
public String toString() {
    return "Employee [id=" + id + ", name=" + name + ", age=" + age + ", loginTime=" + loginTime + "]";
}

Employee(){}

public Employee(int id, String name, int age, Timestamp loginTime) {
    super();
    this.id = id;
    this.name = name;
    this.age = age;
    this.loginTime = loginTime;
}

@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + age;
    result = prime * result + id;
    result = prime * result + ((loginTime == null) ? 0 : loginTime.hashCode());
    result = prime * result + ((name == null) ? 0 : name.hashCode());
    return result;
}

@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null) return false;
    if (getClass() != obj.getClass()) return false;
    Employee other = (Employee) obj;
    if (age != other.age) return false;
    if (id != other.id) return false;
    if (loginTime == null) {
        if (other.loginTime != null) return false;
    } else if (!loginTime.equals(other.loginTime)) return false;
    if (name == null) {
        if (other.name != null) return false;
    } else if (!name.equals(other.name)) return false;
    return true;
}

@Override
public int compareTo(Employee emp) {
    if (emp.getLoginTime().before( this.loginTime) ){
        return 1;
    } else if (emp.getLoginTime().after(this.loginTime)) {
        return -1;
    }
    return 0;
}


}

LRUObjectCacheExample

package com.test.example;

import java.sql.Timestamp;
import java.util.Calendar;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Map.Entry;
import com.test.example.dto.Employee;
/**
 * 
 * @author Vaquar.khan@gmail.com
 *
 */
public class LRUObjectCacheExample {


    LinkedHashMap<Employee, Boolean>    lruCacheLinkedQueue;

public LRUObjectCacheExample(int capacity) {
    lruCacheLinkedQueue = new LinkedHashMap<Employee, Boolean>(capacity, 1.0f, true) {
        /**
         * 
         */
        private static final long   serialVersionUID    = 1L;

        @Override
        protected boolean removeEldestEntry(
                //calling map's entry method
                Map.Entry<Employee, Boolean> eldest) {
            return this.size() > capacity;
        }
    };
}

void addDataIntoCache(Employee employee) {
    lruCacheLinkedQueue.put(employee, true);
    displayLRUQueue();
}

boolean checkIfDataPresentIntoLRUCache(int data) {
    // Note: the cache is keyed by Employee, so an Integer argument never
    // matches a key; as written, these lookups always return null.
    return lruCacheLinkedQueue.get(data) != null;
}

void deletePageNo(int data) {
    if (lruCacheLinkedQueue.get(data) != null) {
        lruCacheLinkedQueue.remove(data);
    }
    displayLRUQueue();
}

 void displayLRUQueue() {
    System.out.print("-------------------------------------------------------"+"\n");
    System.out.print("Data into LRU Cache : ");
    for (Entry<Employee, Boolean> mapEntry : lruCacheLinkedQueue.entrySet()) {
        System.out.print("[" + mapEntry.getKey() + "]");
    }
    System.out.println("");
}

public static void main(String args[]) {
    Employee employee1 = new Employee(1,"Shahbaz",29, getCurrentTimeStamp());
    Employee employee2 = new Employee(2,"Amit",35,getCurrentTimeStamp());
    Employee employee3 = new Employee(3,"viquar",36,getCurrentTimeStamp());
    Employee employee4 = new Employee(4,"Sunny",20,getCurrentTimeStamp());
    Employee employee5 = new Employee(5,"sachin",28,getCurrentTimeStamp());
    Employee employee6 = new Employee(6,"Sneha",25,getCurrentTimeStamp());
    Employee employee7 = new Employee(7,"chantan",19,getCurrentTimeStamp());
    Employee employee8 = new Employee(8,"nitin",22,getCurrentTimeStamp());
    Employee employee9 = new Employee(9,"sanuj",31,getCurrentTimeStamp());
    //
    LRUObjectCacheExample lru = new LRUObjectCacheExample(5);
    lru.addDataIntoCache(employee5);//sachin
    lru.addDataIntoCache(employee4);//Sunny
    lru.addDataIntoCache(employee3);//viquar
    lru.addDataIntoCache(employee2);//Amit
    lru.addDataIntoCache(employee1);//Shahbaz -----capacity reached
    //
    lru.addDataIntoCache(employee6);//Sneha
    lru.addDataIntoCache(employee7);//chantan
    lru.addDataIntoCache(employee8);//nitin
    lru.addDataIntoCache(employee9);//sanuj
    //
    lru.deletePageNo(3);
    lru.deletePageNo(4);

    }
    private static Timestamp getCurrentTimeStamp(){
        return new java.sql.Timestamp(Calendar.getInstance().getTime().getTime());
    }

}

Results:

**Data into LRU Cache :**
[Employee [id=1, name=Shahbaz, age=29, loginTime=2015-10-15 18:47:28.1]]
[Employee [id=6, name=Sneha, age=25, loginTime=2015-10-15 18:47:28.125]]
[Employee [id=7, name=chantan, age=19, loginTime=2015-10-15 18:47:28.125]]
[Employee [id=8, name=nitin, age=22, loginTime=2015-10-15 18:47:28.125]]
[Employee [id=9, name=sanuj, age=31, loginTime=2015-10-15 18:47:28.125]]

LinkedHashMap allows you to override the removeEldestEntry method so that whenever a put is executed, you can specify whether to remove the oldest entry, thus allowing implementation of an LRU cache.

Map<Long, String> myLRUCache = new LinkedHashMap<Long, String>(16, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, String> eldest) {
        return this.size() > 1000;
    }
};

For O(1) access we need a hash table, and to maintain order we can use a DLL (doubly linked list). The basic algorithm is: from the page number we can get to the DLL node using the hash table. If the page exists, we move the node to the head of the DLL; otherwise we insert the node into both the DLL and the hash table. If the DLL is full, we remove the least recently used node from the tail.

Here is an implementation based on a doubly linked list and unordered_map in C++.

#include <iostream>
#include <unordered_map>
#include <utility>

using namespace std;

// Doubly linked list node
class Node
{
    public:
    int data;
    Node* next;
    Node* prev;
};

// Doubly linked list: most recently used at the head
class DLList
{
    public:

    DLList()
    {
        head = NULL;
        tail = NULL;
        count = 0;
    }

    ~DLList() {}

    Node* addNode(int val);
    void print();
    int removeTail();   // returns the evicted value
    void moveToHead(Node* node);

    int size()
    {
        return count;
    }

    private:
    Node* head;
    Node* tail;
    int count;
};

// Add a node at the head of the list

Node* DLList::addNode(int val)
{
    Node* temp = new Node();

    temp->data = val;
    temp->next = NULL;
    temp->prev = NULL;

    if ( tail == NULL )
    {
        tail = temp;
        head = temp;
    }
    else
    {
        head->prev = temp;
        temp->next = head;
        head = temp;
    }

    count++;

    return temp;
}

void DLList::moveToHead(Node* node)
{
    if (head == node)
        return;

    node->prev->next = node->next;

    if (node->next != NULL)
    {
        node->next->prev = node->prev;
    }
    else
    {
        tail = node->prev;
    }

    node->next = head;
    node->prev = NULL;
    head->prev = node;
    head = node;
}

// Remove the least recently used node and return its value
int DLList::removeTail()
{
    count--;
    int val = tail->data;

    if (head == tail)
    {
        delete head;
        head = NULL;
        tail = NULL;
    }
    else
    {
        Node* del = tail;
        tail = del->prev;
        tail->next = NULL;
        delete del;
    }

    return val;
}

void DLList::print()
{
    Node* temp = head;

    int ctr = 0;

    while ( (temp != NULL) && (ctr++ != 25) )
    {
        cout << temp->data << " ";
        temp = temp->next;
    }
    cout << endl;
}

class LRUCache
{
    public:
        LRUCache(int aCacheSize);
        void fetchPage(int pageNumber);

    private:
        int cacheSize;
        DLList dlist;
        unordered_map< int, Node* > directAccess;
};

LRUCache::LRUCache(int aCacheSize) : cacheSize(aCacheSize) { }

void LRUCache::fetchPage(int pageNumber)
{
    unordered_map< int, Node* >::const_iterator it = directAccess.find(pageNumber);

    if (it != directAccess.end())
    {
        dlist.moveToHead(it->second);
    }
    else
    {
        if (dlist.size() >= cacheSize)
        {
            // Evict the LRU node and erase its map entry as well;
            // otherwise the map would keep a dangling pointer.
            int evicted = dlist.removeTail();
            directAccess.erase(evicted);
        }

        Node* node = dlist.addNode(pageNumber);

        directAccess.insert(pair< int, Node* >(pageNumber, node));
    }

    dlist.print();
}

int main()
{
    LRUCache lruCache(10);

    lruCache.fetchPage(5);
    lruCache.fetchPage(7);
    lruCache.fetchPage(15);
    lruCache.fetchPage(34);
    lruCache.fetchPage(23);
    lruCache.fetchPage(21);
    lruCache.fetchPage(7);
    lruCache.fetchPage(32);
    lruCache.fetchPage(34);
    lruCache.fetchPage(35);
    lruCache.fetchPage(15);
    lruCache.fetchPage(37);
    lruCache.fetchPage(17);
    lruCache.fetchPage(28);
    lruCache.fetchPage(16);

    return 0;
}

Extending Peter Lawrey's answer:

The removeEldestEntry(java.util.Map.Entry) method may be overridden to impose a policy for removing stale mappings automatically when new mappings are added to the map.

So LinkedHashMap's FIFO behavior can be overridden to make an LRU cache:

public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    private int cacheSize;

    public LRUCache(int cacheSize) {
        super(cacheSize * 4 / 3, 0.75f, true);
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict only once the size exceeds the capacity; ">=" would cap
        // the cache at cacheSize - 1 entries.
        return size() > cacheSize;
    }

    public static void main(String args[]) {
        LRUCache<Integer, Integer> lruCache = new LRUCache<>(5);

        lruCache.put(1, 1);
        lruCache.put(2, 2);
        lruCache.put(3, 3);
        lruCache.put(1, 4);
        lruCache.put(2, 5);
        lruCache.put(7, 6);

        System.out.println(lruCache.keySet());

        lruCache.put(1, 4);
        lruCache.put(2, 5);

        System.out.println(lruCache.keySet());
    }
} 
A simpler (but O(n)-eviction) approach is to track an age counter per element in a plain HashMap:

import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

class LRU {
    Map<String, Integer> ageMap = new HashMap<String, Integer>();
    int age = 1;

    void addElementWithAge(String element) {

        ageMap.put(element, age);
        age++;

    }

    String getLeastRecent(Map<String, Integer> ageMap) {
        Integer oldestAge = (Integer) ageMap.values().toArray()[0];
        String element = null;
        for (Entry<String, Integer> entry : ageMap.entrySet()) {
            if (oldestAge >= entry.getValue()) {
                oldestAge = entry.getValue();
                element = entry.getKey();
            }
        }
        return element;
    }

    public static void main(String args[]) {
        LRU obj = new LRU();
        obj.addElementWithAge("M1");
        obj.addElementWithAge("M2");
        obj.addElementWithAge("M3");
        obj.addElementWithAge("M4");
        obj.addElementWithAge("M1"); // touching M1 makes M2 the least recent
        System.out.println(obj.getLeastRecent(obj.ageMap)); // prints M2
    }

}
