
Second Level Cache - Why not cache all entities?

In my experience, I have typically used the shared cache setting:

<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>

My process is then to consider which entities are not expected to change often and which would benefit from the cache, performance-wise, and mark those as @Cacheable. This selective approach is a convention I picked up, but I don't fully understand the reasoning behind it.
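With ENABLE_SELECTIVE, only entities explicitly annotated with @Cacheable are stored in the second-level cache. A minimal sketch of such an entity (the class and fields are illustrative — typical read-mostly reference data):

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

// Illustrative reference-data entity: rarely changes, read often,
// so it is a good candidate for the second-level cache.
@Entity
@Cacheable  // only picked up because shared-cache-mode is ENABLE_SELECTIVE
public class Country {

    @Id
    private Long id;

    private String name;

    // getters/setters omitted for brevity
}
```

Entities without the annotation are loaded from the database on every session as usual.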

Why not cache all entities? When can caching all entities become a detriment? How can I better gauge this to make a more educated decision?

Some reasons not to cache entities:

  1. When the entities are changed frequently (so you would end up invalidating/locking them in the cache and re-reading them anyway, but you pay an extra cost of cache maintenance which is not low since cache write operations would be frequent).
  2. If there are a large number of entity instances to cache and none of them is used more frequently than the others within a given period of time. Then you would basically put instances in the cache and evict them soon afterwards to make room for new ones, without reading the cached instances frequently enough to make the cache maintenance costs pay off.
  3. If the entities can be changed without Hibernate being aware of that (from an external application or with direct JDBC for example).
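For case 3, if external writes cannot be avoided entirely, the standard JPA Cache API at least lets you evict stale entries manually when you know an external change has happened (the entity class and factory variable here are illustrative):

```java
import javax.persistence.Cache;
import javax.persistence.EntityManagerFactory;

// Assuming an EntityManagerFactory is available as emf:
Cache cache = emf.getCache();

cache.evict(Country.class, countryId); // evict a single stale instance
cache.evict(Country.class);            // or the whole region for that entity
cache.evictAll();                      // or everything, as a last resort
```

This only helps when your application is told about the external change; truly invisible writes remain a reason to leave the entity uncached.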

If you use ehcache as your provider:

<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</property>

Then you can limit the resources the cache uses by configuring ehcache.xml to evict the least recently used entities as needed.
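An illustrative ehcache.xml fragment along those lines (the region name and limits are examples, and the attribute names shown are for ehcache 2.5+):

```xml
<ehcache>
  <!-- Applies to any cache region without an explicit <cache> entry -->
  <defaultCache maxEntriesLocalHeap="1000"
                memoryStoreEvictionPolicy="LRU"
                timeToIdleSeconds="300"
                timeToLiveSeconds="600"/>

  <!-- Per-entity region: larger limit for hot reference data -->
  <cache name="com.example.Country"
         maxEntriesLocalHeap="5000"
         memoryStoreEvictionPolicy="LRU"/>
</ehcache>
```

With a bounded heap size and an LRU policy, rarely-read entities simply fall out of the cache instead of consuming memory.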

There is a good article here: http://howtodoinjava.com/2013/07/04/hibernate-ehcache-configuration-tutorial/

Generally speaking, I would cache everything and just limit the size of the caches.

Hope this helps.

Why not cache all entities? When can caching all entities become a detriment? How can I better gauge this to make a more educated decision?

Generally speaking, for an application to benefit from caching, the data should be read-mostly, meaning there are multiple reads per write/update. If that is not the case, as in write-mostly or write-only workloads (think sampling data from a thermometer), the cache provides no benefit: there are no repeated reads that could be served cheaply from memory instead of the database.

To make an educated decision, you can cache everything and then watch the hit/miss ratio for the cache. If it is high (say, above 70%), then you are on the right track.
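To make the hit/miss check concrete, here is a minimal sketch. The class and sample counts are hypothetical; with Hibernate, the real counts would come from `Statistics.getSecondLevelCacheHitCount()` and `getSecondLevelCacheMissCount()` after enabling `hibernate.generate_statistics=true`:

```java
// Minimal sketch: deciding whether caching pays off from hit/miss counts.
// In a real application the counts would be read from Hibernate's
// SessionFactory statistics rather than hard-coded.
public class CacheStats {

    /** Returns hits / (hits + misses), or 0.0 when there were no lookups. */
    static double hitRatio(long hits, long misses) {
        long total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }

    public static void main(String[] args) {
        // Hypothetical sample: 850 cache hits, 150 misses.
        double ratio = hitRatio(850, 150);
        System.out.printf("hit ratio: %.2f%n", ratio);
        if (ratio >= 0.70) {
            System.out.println("caching this entity pays off");
        } else {
            System.out.println("consider removing @Cacheable");
        }
    }
}
```

A ratio well below the threshold suggests the entity is being evicted or invalidated faster than it is read, which is exactly the situation described in points 1 and 2 above.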

