
Atomically perform multiple operations

I'm trying to find a way to perform multiple operations on a ConcurrentHashMap in an atomic manner.

My logic is like this:

if (!map.containsKey(key)) {
    map.put(key, value);

    doSomethingElse();
}

I know there is the putIfAbsent method. But if I use it, I still won't be able to call doSomethingElse atomically.

Is there any way of doing such things apart from resorting to synchronization / client-side locking?

If it helps, the doSomethingElse in my case would be pretty complex, involving creating and starting a thread that looks for the key that we just added to the map.

If it helps, the doSomethingElse in my case would be pretty complex, involving creating and starting a thread that looks for the key that we just added to the map.

If that's the case, you would generally have to synchronize externally.
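As a rough sketch of what that external synchronization could look like (the lock field, class name, and doSomethingElse stub are illustrative, not code from the question):

import java.util.concurrent.ConcurrentHashMap;

class ExternallySynchronizedExample {
    private final ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
    private final Object lock = new Object(); // guards the whole check-then-act sequence

    void putAndDoSomethingElse(String key, String value) {
        synchronized (lock) {
            if (!map.containsKey(key)) {
                map.put(key, value);
                doSomethingElse(); // no other holder of 'lock' can interleave here
            }
        }
    }

    private void doSomethingElse() {
        // placeholder for the question's complex work
    }
}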

In some circumstances (depending on what doSomethingElse() expects the state of the map to be, and what the other threads might do to the map), the following may also work:

if (map.putIfAbsent(key, value) == null) {
    doSomethingElse();
}

This will ensure that only one thread goes into doSomethingElse() for any given key.

This would work unless you want all putting threads to wait until the first successful thread has put the value into the map:

if (map.get(key) == null) {
    Object ret = map.putIfAbsent(key, value);
    if (ret == null) { // this thread won the put
        doSomethingElse();
    }
}

Now, if many threads are putting with the same key, only one will win and only one will run doSomethingElse().

If your design demands that the map access and the other operation be grouped without anybody else accessing the map, then you have no choice but to lock them. Perhaps the design can be revisited to avoid this need?

This also implies that all other accesses to the map must be serialized behind the same lock.
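To make the implication concrete, here is a hedged sketch (class and method names are made up): once every access, including plain reads, goes through the one lock, the ConcurrentHashMap no longer buys any extra concurrency, and a plain HashMap behind the same lock would behave the same.

import java.util.HashMap;
import java.util.Map;

class SerializedMapAccess {
    private final Map<String, String> map = new HashMap<>();
    private final Object mapLock = new Object();

    void putAndDoSomethingElse(String key, String value) {
        synchronized (mapLock) {
            if (!map.containsKey(key)) {
                map.put(key, value);
                doSomethingElse();
            }
        }
    }

    String read(String key) {
        synchronized (mapLock) { // even reads must use the same lock, or they may interleave
            return map.get(key);
        }
    }

    private void doSomethingElse() { /* the grouped operation */ }
}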

You might keep a lock per entry. That would allow concurrent non-locking updates, unless two threads try to access the same element.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class LockedReference<T> {
    final Lock lock = new ReentrantLock();
    final T value;
    LockedReference(T value) { this.value = value; }
}

LockedReference<T> ref = new LockedReference<>(value);
ref.lock.lock(); // lock on the new reference; there is no contention here
try {
    if (map.putIfAbsent(key, ref) == null) {
        // we have locked on the key before inserting the element
        doSomethingElse();
    }
} finally {
    ref.lock.unlock();
}

Later, when reading an entry:

Object value;
while (true) {
    LockedReference<T> ref = map.get(key);
    if (ref != null) {
        ref.lock.lock();
        // there is no contention, unless a thread is already working on this entry
        try {
            if (map.containsKey(key)) {
                value = ref.value;
                break;
            }
            // key was removed between get and lock; retry
        } finally {
            ref.lock.unlock();
        }
    } else {
        value = null; // no entry for this key
        break;
    }
}

A fancier approach would be rewriting ConcurrentHashMap to have a version of putIfAbsent that accepts a Runnable (which is executed if the element was put). But that would be far more complex.
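For what it's worth, on Java 8 and later ConcurrentHashMap.computeIfAbsent comes close to that idea without rewriting the map: the mapping function runs atomically for the key and is invoked only when the element is actually put. This is only a hedged sketch (the key/value types and the runOnInsert hook are illustrative); the Javadoc asks that the function be short and not update other mappings of the same map, and the new entry only becomes visible to other threads after the function returns.

import java.util.concurrent.ConcurrentHashMap;

class ComputeIfAbsentSketch {
    private final ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();

    void putAtomically(String key, String value, Runnable runOnInsert) {
        // The mapping function is invoked only if the key is absent, and it runs
        // atomically with respect to other updates of this key.
        map.computeIfAbsent(key, k -> {
            runOnInsert.run(); // executed only by the thread that actually inserts
            return value;
        });
    }
}

If the side work needs the entry to already be visible in the map (as with a thread that looks the key up), it is safer to run it after the call, using the putIfAbsent-returned-null pattern shown earlier.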

Basically, ConcurrentHashMap implements locked segments, which is in the middle between one lock per entry and one global lock for the whole map.
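As an illustration of that middle ground (not the actual ConcurrentHashMap internals, just a hedged sketch of the striping idea): keys are hashed onto a small fixed array of locks, so unrelated keys usually proceed in parallel while keys that land in the same stripe contend.

import java.util.concurrent.locks.ReentrantLock;

class StripedLocks {
    private final ReentrantLock[] stripes = new ReentrantLock[16]; // power of two

    StripedLocks() {
        for (int i = 0; i < stripes.length; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    // Map a key to one of the stripes; spread the hash bits to avoid clustering.
    ReentrantLock lockFor(Object key) {
        int h = key.hashCode();
        h ^= (h >>> 16);
        return stripes[h & (stripes.length - 1)];
    }
}

A caller would take lockFor(key) around the compound operation; this is coarser than the per-entry LockedReference above, but it needs no per-key bookkeeping.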
