
Timeout Mechanism for Hashtable

I have a hashtable that is under heavy traffic. I want to add a timeout mechanism to it that removes records that are too old. My concerns are: it should be lightweight, and the remove operation is not time-critical. I mean that with a timeout value of 1 hour, the removal can happen after 1 hour or even after 1 hour 15 minutes; that is no problem.

My idea is to create a big array (used as a ring buffer) that stores the insertion time and the hashtable key. When adding to the hashtable, I use an array index to find the next slot in the array: if the slot is empty, I store the insertion time and the hashtable key there; if it is not empty, I compare the stored insertion time to check whether a timeout has occurred.
If a timeout has occurred, I remove that key from the hashtable (if it has not been removed already); if not, I increment the index until I find an empty or timed-out slot. When removing from the hashtable directly, nothing is done to the big array.

In short, every add operation on the hashtable either removes one timed-out element from it or does nothing.
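For what it's worth, here is a minimal single-threaded sketch of the scheme described above (the class name ExpiringTable, the plain HashMap, and the constructor parameters are my own; a heavy-traffic deployment would need synchronization or a ConcurrentHashMap):

import java.util.HashMap;
import java.util.Map;

// Single-threaded sketch of the ring-buffer idea from the question.
class ExpiringTable<K, V> {
    private static final class Slot<K> {
        long putTime;
        K key;
        Slot(long putTime, K key) { this.putTime = putTime; this.key = key; }
    }

    private final Map<K, V> map = new HashMap<>();
    private final Slot<K>[] ring;
    private final long timeoutMillis;
    private int index = 0;

    @SuppressWarnings("unchecked")
    ExpiringTable(int capacity, long timeoutMillis) {
        this.ring = (Slot<K>[]) new Slot[capacity];
        this.timeoutMillis = timeoutMillis;
    }

    void put(K key, V value) {
        map.put(key, value);
        long now = System.currentTimeMillis();
        // Walk forward until an empty or timed-out slot turns up, bounded to
        // one full lap; at most one stale entry is evicted per put.
        for (int steps = 0; steps < ring.length; steps++) {
            Slot<K> slot = ring[index];
            if (slot == null) {
                ring[index] = new Slot<>(now, key);  // record put time and key
                return;
            }
            if (now - slot.putTime >= timeoutMillis) {
                map.remove(slot.key);  // may already be gone; that's fine
                slot.putTime = now;    // reuse the slot for the new entry
                slot.key = key;
                return;
            }
            index = (index + 1) % ring.length;
        }
        // If the whole ring is still fresh, the new entry simply goes
        // untracked, matching the relaxed "or do nothing" requirement.
    }

    V get(K key) {
        return map.get(key);
    }
}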

What would be a more elegant and more lightweight solution?

Thanks for the help.

My approach would be to use the Guava MapMaker:

import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;

import com.google.common.base.Function;
import com.google.common.collect.MapMaker;

// Entries are dropped roughly 1 hour after they were written, and
// calculateMyValue() is computed on demand for missing keys.
ConcurrentMap<String, MyValue> graphs = new MapMaker()
   .maximumSize(100)
   .expireAfterWrite(1, TimeUnit.HOURS)
   .makeComputingMap(
       new Function<String, MyValue>() {
         public MyValue apply(String string) {
           return calculateMyValue(string);
         }
       });

This might not be exactly what you're describing, but chances are it's close enough. And it's much easier to produce (plus it's built on a well-tested code base).

Note that you can tweak the behaviour of the resulting Map by calling different methods before the make*() call.

You should rather consider using a LinkedHashMap or maybe a WeakHashMap.

The former has a constructor to set the iteration order of its elements to the order of last access; this makes it trivial to remove elements that are too old. And its removeEldestEntry method can be overridden to define your own policy on when to remove the eldest entry automatically after the insertion of a new one, as in the sketch below.
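A minimal sketch of that approach, assuming each value is wrapped together with its insertion time (the Timestamped wrapper, the TimeoutMap name, and the one-hour constant are my own illustration, not part of the answer):

import java.util.LinkedHashMap;
import java.util.Map;

// Wrapper recording when a value was put into the map.
final class Timestamped<V> {
    final long putTime = System.currentTimeMillis();
    final V value;
    Timestamped(V value) { this.value = value; }
}

// LinkedHashMap in insertion order; removeEldestEntry is consulted after
// every put(), so at most one stale entry is evicted per insertion.
class TimeoutMap<K, V> extends LinkedHashMap<K, Timestamped<V>> {
    private static final long TIMEOUT_MILLIS = 60 * 60 * 1000L; // 1 hour

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, Timestamped<V>> eldest) {
        return System.currentTimeMillis() - eldest.getValue().putTime >= TIMEOUT_MILLIS;
    }
}

Only the single eldest entry is checked per put, so several stale entries drain out over subsequent inserts, which is fine given the relaxed timing requirement in the question.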

The latter uses weak references to keys, so any key which has no other reference to it can be automatically garbage collected.
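For completeness, a tiny illustration of that WeakHashMap behaviour (the class and variable names are mine; note that GC timing is never guaranteed):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheDemo {
    public static void main(String[] args) {
        Map<Object, String> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, "value");

        key = null;   // drop the only strong reference to the key
        System.gc();  // only a hint; collection may happen later
        System.out.println(cache.size()); // typically 0 once the key is collected
    }
}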

I think a much easier solution is to use LRUMap from Apache Commons Collections. Of course you can write your own data structures if you enjoy it or want to learn, but this problem is so common that numerous ready-made solutions exist. (I'm sure others will point you to other implementations too; after a while your problem will be choosing the right one from them :))
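A minimal example of that suggestion, using the commons-collections4 package (the capacity of 1000 is my own choice, and MyValue is the type from the Guava snippet above; note that a plain LRUMap evicts by least-recent use once full, not by wall-clock timeout):

import java.util.Map;
import org.apache.commons.collections4.map.LRUMap;

// Keeps the 1000 most recently used entries; the least recently used
// entry is dropped when a new one is added to a full map.
Map<String, MyValue> cache = new LRUMap<>(1000);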

Under the assumption that the currently most heavily accessed items in your cache structure are in the significant minority, you may well get by with randomly selecting items for removal (you have a low probability of removing something very useful). I've used this technique and, in this particular application, it worked very well and took next to no implementation effort.
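A sketch of that random-eviction idea (the threshold parameter, the O(n) victim scan, and the helper class name are my own assumptions, not from the answer):

import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

final class RandomEvictingPut {
    // When the map is at capacity, evict one randomly chosen entry before
    // inserting; with hot items in the minority, victims are rarely useful.
    static <K, V> void put(Map<K, V> map, K key, V value, int maxSize) {
        if (map.size() >= maxSize) {
            int skip = ThreadLocalRandom.current().nextInt(map.size());
            K victim = null;
            for (K k : map.keySet()) {      // O(n) victim pick; sketch only
                if (skip-- == 0) { victim = k; break; }
            }
            if (victim != null) map.remove(victim);
        }
        map.put(key, value);
    }
}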
