
Is my class thread-safe?

The service must cache data in memory and persist it to the database. getAmount(id) returns the current balance, or zero if addAmount() has never been called for the given id. addAmount(id, amount) increases the balance, or sets it on the first call. The service must be thread-safe. Is my implementation thread-safe? What improvements can be made?

public class AccountServiceImpl implements AccountService {
    private static final Logger LOGGER = LoggerFactory.getLogger(AccountServiceImpl.class);
    private LoadingCache<Integer, Account> cache;
    private AccountDAO accountDAO = new AccountDAOImpl();

    public AccountServiceImpl() {
        cache = CacheBuilder.newBuilder()
                .expireAfterAccess(1, TimeUnit.HOURS)
                .concurrencyLevel(4)
                .maximumSize(10000)
                .recordStats()
                .build(new CacheLoader<Integer, Account>() {
                    @Override
                    public Account load(Integer id) throws Exception {
                        return new Account(id, accountDAO.getAmountById(id));
                    }
                });
    }

    public Long getAmount(Integer id) throws Exception {
        synchronized (cache.get(id)) {
            return cache.get(id).getAmount();
        }
    }

    public void addAmount(Integer id, Long value) throws Exception {
        Account account = cache.get(id);
        synchronized (account) {
            accountDAO.addAmount(id, value);
            account.setAmount(accountDAO.getAmountById(id));
            cache.put(id, account);
        }
    }
}

A race condition could occur if the Account is evicted from the cache and multiple updates to that account are taking place. The eviction results in multiple Account instances, so the synchronization doesn't provide atomicity and a stale value could be inserted into the cache.

The race is more obvious if you change the settings, e.g. maximumSize(0). With the current settings the race may be rare, but an entry can still be evicted even right after an access: the entry might already have been chosen for eviction but not yet removed, so a subsequent read succeeds even though the access is ignored from the eviction policy's perspective.

The proper way to do this in Guava is to Cache.invalidate() the entry. The DAO transactionally updates the system of record, so it ensures atomicity of the operation. The LoadingCache ensures atomicity of an entry being computed, so reads are blocked while a fresh value is loaded. This results in an extra database lookup, which seems unnecessary but is negligible in practice. Unfortunately a tiny race still remains, because Guava does not invalidate entries that are in the middle of loading.

Guava doesn't support the write-through caching behavior you are trying to implement. Its successor, Caffeine, does, by exposing Java 8's compute map methods and, soon, a CacheWriter abstraction. That said, the loading approach Guava expects is simple, elegant, and less error prone than manual updates.
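The invalidate-on-write pattern described above can be sketched without Guava, using ConcurrentHashMap as a stand-in for both the cache and the system of record (all names here are illustrative, not from the original code; with Guava you would call cache.invalidate(id) instead of cache.remove(id)):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the invalidate-on-write pattern. One ConcurrentHashMap stands
// in for the LoadingCache, another for the database.
class InvalidateOnWriteSketch {
    private final Map<Integer, Long> database = new ConcurrentHashMap<>();
    private final Map<Integer, Long> cache = new ConcurrentHashMap<>();

    long getAmount(int id) {
        // computeIfAbsent plays the role of the CacheLoader: on a miss the
        // value is loaded atomically from the system of record.
        return cache.computeIfAbsent(id, k -> database.getOrDefault(k, 0L));
    }

    void addAmount(int id, long value) {
        // Update the system of record first, then drop the cached entry;
        // the next read reloads the fresh value instead of a stale one.
        database.merge(id, value, Long::sum);
        cache.remove(id);
    }
}
```

The write path never mutates the cached value in place, so there is no window in which a stale value can be written back over a fresh one.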

There are two issues here to take care of:

  1. The update of the amount value must be atomic.

If you have declared:

class Account { long amount; }

Changing the field value is not guaranteed to be atomic on 32-bit systems, where a long write may be split into two 32-bit writes; on 64-bit systems it is atomic in practice. See: Are 64 bit assignments in Java atomic on a 32 bit machine?

So, the best way would be to change the declaration to "volatile long amount;". Then the update of the value is always atomic, and volatile additionally ensures that other threads/CPUs see the changed value.

That means for updating the single value, you don't need the synchronized block.
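As a sketch, the Account class would then look like this (the fields and accessors are assumptions inferred from the getters and setters used in the question). Note that volatile makes a single write atomic and visible, but it does not make a read-modify-write such as amount += x atomic:

```java
// Sketch of an Account with a volatile balance field. A single write to a
// volatile long is atomic even on 32-bit JVMs, and is immediately visible
// to other threads.
class Account {
    private final int id;
    private volatile long amount;

    Account(int id, long amount) {
        this.id = id;
        this.amount = amount;
    }

    int getId() { return id; }

    long getAmount() { return amount; }   // plain read, no lock needed

    void setAmount(long amount) {         // single atomic, visible write
        this.amount = amount;
    }
}
```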

  2. Race between insertion and modification

With your synchronized statements you just solve the first problem. But there are multiple races in your code.

See this code:

synchronized (cache.get(id)) {
    return cache.get(id).getAmount();
}

You obviously assume that cache.get(id) returns the same object instance when called with the same id. That is not the case: a cache in general does not guarantee this.

The Guava cache blocks until loading is complete. Other caches may or may not block; if requests come in in parallel, multiple loads will be triggered, resulting in multiple changes of the stored cache value.

Still, the Guava cache is a cache, so an item may be evicted at any time, and the next get then returns another instance.

Same problem here:

public void addAmount(Integer id, Long value) throws Exception {
   Account account = cache.get(id);
   /* what happens if lots of requests come in and other
      threads evict the account object from the cache? */
   synchronized (account) {
      . . .

In general: never synchronize on an object whose life cycle is not under your control. BTW: other cache implementations may store just the serialized object value and return another instance on each request.

Since you have a cache.put after the modification, your solution will probably work. However, the synchronized block then merely serves to flush memory; it may or may not actually provide mutual exclusion.

The cache is updated after the value has changed in the database. This means an application may still read the former value even though the database already holds the new one. This can lead to inconsistencies.

Solution 1

Have a static set of lock objects chosen by the key value, e.g. locks[id % locks.length]. See my answer here: Guava Cache, how to block access while doing removal
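A minimal sketch of such a lock stripe (class and method names are illustrative, not from the original code):

```java
// Sketch of a static stripe of lock objects selected by key. The locks'
// life cycle is under our control, unlike the Account instances handed
// out by the cache, so synchronizing on them is safe.
class StripedLocks {
    private final Object[] locks;

    StripedLocks(int stripes) {
        locks = new Object[stripes];
        for (int i = 0; i < stripes; i++) {
            locks[i] = new Object();
        }
    }

    Object lockFor(int id) {
        // Math.floorMod keeps the index non-negative for negative ids,
        // which a plain id % locks.length would not.
        return locks[Math.floorMod(id, locks.length)];
    }
}
```

Callers would then wrap both the getAmount and addAmount bodies in synchronized (locks.lockFor(id)) { ... }, so all operations on the same id serialize on the same monitor regardless of cache evictions.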

Solution 2

Use database transactions, and update with the pattern:

Transaction.begin();
cache.remove(id);
accountDAO.addAmount(id, value);
Transaction.commit();

Do not update the value directly inside the cache. That would lead to update races and would require locking again.

If the transactions are solely handled in the DAO, this means for your software architecture that the caching should be implemented in the DAO and not outside.

Solution 3

Why not just store the amount value in the cache? If it is acceptable that the cache results may be temporarily inconsistent with the database content while updating, the simplest solution is:

public AccountServiceImpl() {
    cache = CacheBuilder.newBuilder()
        .expireAfterAccess(1, TimeUnit.HOURS)
        .concurrencyLevel(4)
        .maximumSize(10000)
        .recordStats()
        .build(new CacheLoader<Integer, Long>() {
            @Override
            public Long load(Integer id) throws Exception {
                return accountDAO.getAmountById(id);
            }
        });
}

Long getAmount(Integer id) throws ExecutionException {
    return cache.get(id);
}

void addAmount(Integer id, Long value) {
    accountDAO.addAmount(id, value);
    cache.invalidate(id);
}

No.

 private LoadingCache cache;

must be final.

cache.get(id)

must be synchronized. Are you using a library for that?

The cache must be synchronized. Otherwise, with two threads updating the amount at the same time, you can never be sure of the final result. Check the implementation of the `put` method of the library you use.
