
How to optimize concurrent operations in Java?

I'm still quite shaky on multi-threading in Java. What I describe here is at the very heart of my application and I need to get this right. The solution needs to work fast and it needs to be practically safe. Will this work? Any suggestions/criticism/alternative solutions welcome.


Objects used within my application are somewhat expensive to generate but change rarely, so I am caching them in *.temp files. It is possible for one thread to try and retrieve a given object from cache, while another is trying to update it there. Cache operations of retrieve and store are encapsulated within a CacheService implementation.

Consider this scenario:

Thread 1: retrieve cache for objectId "page_1".
Thread 2: update cache for objectId "page_1".
Thread 3: retrieve cache for objectId "page_2".
Thread 4: retrieve cache for objectId "page_3".
Thread 5: retrieve cache for objectId "page_4".

Note: thread 1 appears to retrieve an obsolete object, because thread 2 has a newer copy of it. This is perfectly OK so I do not need any logic that will give thread 2 priority.

If I synchronize the retrieve/store methods on my service, then I'm unnecessarily slowing things down for threads 3, 4 and 5. Multiple retrieve operations will be in flight at any given time, but the update operation will be called rarely. This is why I want to avoid method-level synchronization.

I gather I need to synchronize on an object that is exclusively common to threads 1 and 2, which implies a lock object registry. Here, an obvious choice would be a Hashtable, but operations on Hashtable are synchronized, so I'm trying a HashMap instead. The map stores a String to be used as the lock object for synchronization; both the key and the value are the id of the object being cached. So for object "page_1" the key would be "page_1" and the lock object would be a String with a value of "page_1".

If I've got the registry right, then additionally I want to protect it from being flooded with too many entries. Let's not get into details why. Let's just assume that if the registry has grown past a defined limit, it needs to be reinitialized with 0 elements. This is a bit of a risk with an unsynchronized HashMap, but this flooding would be something outside of normal application operation. It should be a very rare occurrence and hopefully never takes place. But since it is possible, I want to protect myself from it.

@Service
public class CacheServiceImpl implements CacheService {

    private static ConcurrentHashMap<String, String> objectLockRegistry = new ConcurrentHashMap<>();

    public Object getObject(String objectId) {
        String objectLock = getObjectLock(objectId);
        if (objectLock != null) {
            synchronized (objectLock) {
                // read object from objectInputStream
            }
        }
        return null; // placeholder: return the deserialized object
    }

    public boolean storeObject(String objectId, Object object) {
        String objectLock = getObjectLock(objectId);

        synchronized (objectLock) {
            // write object to objectOutputStream
        }
        return true; // placeholder: return success/failure
    }

    private String getObjectLock(String objectId) {
        int objectLockRegistryMaxSize = 100_000;

        // reinitialize registry if necessary
        if (objectLockRegistry.size() > objectLockRegistryMaxSize) {
            // hoping to never reach this point but it is not impossible to get here
            synchronized (objectLockRegistry) {
                if (objectLockRegistry.size() > objectLockRegistryMaxSize) {
                    objectLockRegistry.clear();
                }
            }
        }

        // add lock to registry if necessary
        objectLockRegistry.putIfAbsent(objectId, new String(objectId));

        return objectLockRegistry.get(objectId);
    }
}

If you are reading from disk, lock contention is not going to be your performance issue.

You can have both threads grab the lock for the entire cache, do a read, if the value is missing, release the lock, read from disk, acquire the lock, and then if the value is still missing write it, otherwise return the value that is now there.
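The check / load-outside-the-lock / re-check sequence described above can be sketched like this (a minimal illustration; loadFromDisk is a hypothetical stand-in for the expensive read):

```java
import java.util.HashMap;
import java.util.Map;

class CheckThenLoadCache {
    private final Map<String, Object> cache = new HashMap<>();

    Object get(String key) {
        synchronized (cache) {
            Object v = cache.get(key);
            if (v != null) return v;          // hit: done
        }
        Object loaded = loadFromDisk(key);    // slow I/O happens outside the lock
        synchronized (cache) {
            Object v = cache.get(key);        // re-check: another thread may have won the race
            if (v == null) {
                cache.put(key, loaded);
                v = loaded;
            }
            return v;                         // everyone sees the same published value
        }
    }

    Object loadFromDisk(String key) {
        return "value-for-" + key;            // stand-in for real deserialization
    }
}
```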

The only issue you will have with that is concurrent reads thrashing the disk... but the OS caches will be hot, so the disk shouldn't be thrashed too badly.

If that is an issue, then switch your cache to holding a Future<V> in place of a plain V.

The get method will become something like:

public V get(K key) throws InterruptedException, ExecutionException {
    Future<V> future;
    synchronized (this) {
        future = backingCache.get(key);
        if (future == null) {
            future = executorService.submit(new LoadFromDisk(key));
            backingCache.put(key, future);
        }
    }
    return future.get();
}

Yes, that is a global lock... but you're reading from disk, and you shouldn't optimize until you have a proven performance bottleneck...

Oh. First optimization, replace the map with a ConcurrentHashMap and use putIfAbsent and you'll have no lock at all! (BUT only do that when you know this is an issue)
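That putIfAbsent variant can be sketched as a minimal memoizing cache (essentially the well-known Memoizer pattern from Java Concurrency in Practice; the FutureCache name and loader function here are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.function.Function;

// The first thread to putIfAbsent a FutureTask runs it; every other thread
// for the same key blocks on the same Future instead of loading twice.
class FutureCache<K, V> {
    private final ConcurrentHashMap<K, Future<V>> backing = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    FutureCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        Future<V> f = backing.get(key);
        if (f == null) {
            FutureTask<V> task = new FutureTask<>(() -> loader.apply(key));
            f = backing.putIfAbsent(key, task); // atomic: only one task wins
            if (f == null) {                    // we won the race; run the load
                f = task;
                task.run();
            }
        }
        try {
            return f.get();                     // all callers share one computation
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}
```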

The complexity of your scheme has already been discussed. It leads to hard-to-find bugs. For example, not only do you lock on non-final variables, but you even change them in the middle of synchronized blocks that use them as a lock. Multi-threading is very hard to reason about, and this kind of code makes it almost impossible:

    synchronized(objectLockRegistry) {
        if(objectLockRegistry.size() > objectLockRegistryMaxSize) {
            objectLockRegistry = new HashMap<>(); //brrrrrr...
        }
    }

In particular, 2 simultaneous calls to get a lock on a specific string might actually return 2 different instances of the same string, each stored in a different instance of your hashmap (unless they are interned), and you won't be locking on the same monitor.
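The monitor-identity pitfall described here is easy to demonstrate:

```java
// Two equal but distinct String instances are two different lock monitors.
class StringMonitorDemo {
    public static void main(String[] args) {
        String a = new String("page_1");   // deliberately not interned
        String b = new String("page_1");
        System.out.println(a.equals(b));              // true  -- equal value
        System.out.println(a == b);                   // false -- distinct objects, distinct monitors
        System.out.println(a.intern() == b.intern()); // true  -- interning yields one shared instance
    }
}
```

Two threads synchronizing on a and b respectively would not exclude each other, even though the strings compare equal.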

You should either use an existing library or keep it a lot simpler.

If your question includes the keywords "optimize", "concurrent", and your solution includes a complicated locking scheme ... you're doing it wrong. It is possible to succeed at this sort of venture, but the odds are stacked against you. Prepare to diagnose bizarre concurrency bugs, including but not limited to, deadlock, livelock, cache incoherency... I can spot multiple unsafe practices in your example code.

Pretty much the only way to create a safe and effective concurrent algorithm without being a concurrency god is to take one of the pre-baked concurrent classes and adapt them to your need. It's just too hard to do unless you have an exceptionally convincing reason.

You might take a look at ConcurrentMap. You might also like CacheBuilder.

Using Threads and synchronized directly is covered by the beginning of most tutorials about multithreading and concurrency. However, many real-world examples require more sophisticated locking and concurrency schemes, which are cumbersome and error-prone if you implement them yourself. To prevent reinventing the wheel over and over again, the Java concurrency library was created. There, you can find many classes that will be of great help to you. Try googling for tutorials about Java concurrency and locks.

As an example of a lock which might help you, see http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReadWriteLock.html.
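A minimal sketch of how a ReadWriteLock maps onto the access pattern in the question (frequent retrieves, rare updates); the class and field names are illustrative:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RwCache {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private Object value;

    Object retrieve() {
        lock.readLock().lock();      // shared: readers don't block each other
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    void store(Object v) {
        lock.writeLock().lock();     // exclusive: blocks all readers and writers
        try {
            value = v;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Threads 3, 4 and 5 from the scenario would all hold the read lock concurrently; only thread 2's update would take the exclusive write lock.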

Rather than roll your own cache I would take a look at Google's MapMaker . Something like this will give you a lock cache that automatically expires unused entries as they are garbage collected:

ConcurrentMap<String, String> objectLockRegistry = new MapMaker()
    .softValues()
    .makeComputingMap(new Function<String, String>() {
        public String apply(String s) {
            return new String(s);
        }
    });

With this, the whole getObjectLock implementation is simply return objectLockRegistry.get(objectId) - the map takes care of all the "create if not already present" stuff for you in a safe way.

I would do it similarly to you: just create a map of plain lock objects (new Object()).
But in contrast to you, I would use a TreeMap<String, Object> or a HashMap. Call it the lockMap, with one entry per file to lock. The lockMap is available to all participating threads.
Each read of and write to a specific file gets the lock from the map and uses synchronized(lock) on that lock object.
If the lockMap is not fixed and its contents can change, then reads and writes to the map itself must be synchronized, too (synchronized(this.lockMap) {...}).
But your getObjectLock() is not safe; synchronize all of it with your lock. (Double-checked locking is not thread safe in Java!) A recommended book: Doug Lea, Concurrent Programming in Java.
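The per-file lock map this answer describes might look like the following sketch, assuming Java 8+ so that computeIfAbsent replaces the unsafe double-checked pattern (class and method names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

// One lock object per file id, created atomically on first use.
class FileLocks {
    private final ConcurrentHashMap<String, Object> lockMap = new ConcurrentHashMap<>();

    Object lockFor(String fileId) {
        // computeIfAbsent guarantees exactly one lock object per key,
        // with no explicit synchronization on the map itself
        return lockMap.computeIfAbsent(fileId, k -> new Object());
    }
}
```

Callers would then wrap their file I/O in synchronized (fileLocks.lockFor("page_1")) { ... }, so reads and writes to the same file exclude each other while different files proceed in parallel.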
