
Spring + Hibernate: Query Plan Cache Memory usage

I'm developing an application with the latest version of Spring Boot. I recently ran into problems with a growing heap that cannot be garbage collected. Analyzing the heap with Eclipse MAT showed that, within one hour of running the application, the heap grew to 630MB, with Hibernate's SessionFactoryImpl using more than 75% of the whole heap.

[Screenshot: Eclipse MAT heap analysis]

I was looking for possible causes around the query plan cache, but the only thing I found was this, and it did not pan out. The properties were set like this:

spring.jpa.properties.hibernate.query.plan_cache_max_soft_references=1024
spring.jpa.properties.hibernate.query.plan_cache_max_strong_references=64

The database queries are all generated by Spring Data's query derivation, using repository interfaces as in this documentation. There are about 20 different queries generated with this technique; no other native SQL or HQL is used. Sample:

@Transactional
public interface TrendingTopicRepository extends JpaRepository<TrendingTopic, Integer> {
    List<TrendingTopic> findByNameAndSource(String name, String source);
    List<TrendingTopic> findByDateBetween(Date dateStart, Date dateEnd);
    Long countByDateBetweenAndName(Date dateStart, Date dateEnd, String name);
}

or

List<SomeObject> findByNameAndUrlIn(String name, Collection<String> urls);

as an example of IN usage.

The question is: why does the query plan cache keep growing (it does not stop; it ends in a full heap), and how can this be prevented? Has anyone encountered a similar problem?

Versions:

  • Spring Boot 1.2.5
  • Hibernate 4.3.10

I've hit this issue as well. It basically boils down to having a variable number of values in your IN clause and Hibernate trying to cache those query plans.

There are two great blog posts on this topic. The first:

Using Hibernate 4.2 and MySQL in a project with an in-clause query such as: select t from Thing t where t.id in (?)

Hibernate caches these parsed HQL queries. Specifically, the Hibernate SessionFactoryImpl has a QueryPlanCache with a queryPlanCache and a parameterMetadataCache. But this proved to be a problem when the number of parameters for the in-clause is large and varies.

These caches grow for every distinct query. So a query with 6000 parameters is not the same as one with 6001.

The in-clause query is expanded to the number of parameters in the collection. Metadata is included in the query plan for each parameter in the query, including a generated name like x10_, x11_, etc.

Imagine 4000 different variations in the number of in-clause parameter counts, each with an average of 4000 parameters. The query metadata for each parameter quickly adds up in memory, filling up the heap, since it can't be garbage collected.

This continues until all the different variations of the query parameter count are cached, or the JVM runs out of heap memory and starts throwing java.lang.OutOfMemoryError: Java heap space.

Avoiding in-clauses is an option, as is using a fixed collection size for the parameter (or at least a smaller one).

For configuring the query plan cache max size, see the property hibernate.query.plan_cache_max_size, which defaults to 2048 (easily too large for queries with many parameters).
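In a Spring Boot application like the one in the question, this could be set in application.properties using the same spring.jpa.properties prefix as the other Hibernate settings above (the values here are illustrative, not recommendations):

```properties
# Shrink Hibernate's query plan cache and parameter metadata cache
spring.jpa.properties.hibernate.query.plan_cache_max_size=256
spring.jpa.properties.hibernate.query.plan_parameter_metadata_max_size=128
```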

And the second (also referenced from the first):

Hibernate internally uses a cache that maps HQL statements (as strings) to query plans. The cache consists of a bounded map limited by default to 2048 elements (configurable). All HQL queries are loaded through this cache. In the case of a miss, the entry is automatically added to the cache. This makes it very susceptible to thrashing - a scenario in which we constantly put new entries into the cache without ever reusing them, thus preventing the cache from bringing any performance gains (it even adds some cache-management overhead). To make things worse, it is hard to detect this situation by chance - you have to explicitly profile the cache in order to notice that you have a problem there. I will say a few words on how this can be done later on.

So the cache thrashing results from new queries being generated at high rates. This can be caused by a multitude of issues. The two most common that I have seen are bugs in Hibernate which cause parameters to be rendered in the JPQL statement instead of being passed as parameters, and the use of an "in" clause.

Due to some obscure bugs in Hibernate, there are situations when parameters are not handled correctly and are rendered into the JPQL query (as an example, check out HHH-6280). If you have a query that is affected by such defects and it is executed at high rates, it will thrash your query plan cache because each JPQL query generated is almost unique (containing IDs of your entities, for example).

The second issue lies in the way that Hibernate processes queries with an "in" clause (e.g. give me all person entities whose company id field is one of 1, 2, 10, 18). For each distinct number of parameters in the "in" clause, Hibernate will produce a different query - e.g. select x from Person x where x.company.id in (:id0_) for 1 parameter, select x from Person x where x.company.id in (:id0_, :id1_) for 2 parameters, and so on. All these queries are considered different as far as the query plan cache is concerned, resulting again in cache thrashing. You could probably work around this issue by writing a utility class that produces only certain numbers of parameters - e.g. 1, 10, 100, 200, 500, 1000. If you, for example, pass 22 parameters, it will return a list of 100 elements with the 22 parameters included in it and the remaining 78 parameters set to an impossible value (e.g. -1 for IDs used for foreign keys). I agree that this is an ugly hack, but it could get the job done. As a result you will only have at most 6 unique queries in your cache, thus reducing thrashing.
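A minimal sketch of such a padding utility (the class name and bucket sizes are taken from the example above; -1L is assumed to be an impossible ID in your data):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public final class InClausePadding {

    // Fixed bucket sizes, so at most six distinct IN-clause shapes are generated.
    private static final int[] BUCKETS = {1, 10, 100, 200, 500, 1000};
    private static final Long PADDING_ID = -1L; // assumed never to match a real ID

    // Pads the given IDs up to the next bucket size with an impossible value.
    // Collections larger than the biggest bucket are returned unpadded.
    public static List<Long> pad(Collection<Long> ids) {
        int bucket = BUCKETS[BUCKETS.length - 1];
        for (int candidate : BUCKETS) {
            if (ids.size() <= candidate) {
                bucket = candidate;
                break;
            }
        }
        List<Long> padded = new ArrayList<>(ids);
        while (padded.size() < bucket) {
            padded.add(PADDING_ID);
        }
        return padded;
    }
}
```

Passing the padded list to a repository method such as findByIdIn(...) then always produces one of at most six query shapes, instead of one per distinct collection size.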

So how do you find out that you have this issue? You could write some additional code and expose metrics with the number of entries in the cache, e.g. over JMX, tune logging and analyze the logs, etc. If you do not want to (or cannot) modify the application, you could just dump the heap and run this OQL query against it (e.g. using MAT): SELECT l.query.toString() FROM INSTANCEOF org.hibernate.engine.query.spi.QueryPlanCache$HQLQueryPlanKey l. It will output all queries currently located in any query plan cache on your heap. It should be pretty easy to spot whether you are affected by any of the aforementioned problems.

As far as the performance impact goes, it is hard to say, as it depends on too many factors. I have seen a very trivial query causing 10-20 ms of overhead spent on creating a new HQL query plan. In general, if there is a cache somewhere, there must be a good reason for it - a miss is probably expensive, so you should try to avoid misses as much as possible. Last but not least, your database will have to handle large amounts of unique SQL statements too - causing it to parse them and maybe create a different execution plan for every one of them.

I had the same problem with many (>10000) parameters in IN-queries. The number of parameters was always different and I could not predict it, so my QueryPlanCache grew too fast.

For database systems supporting execution plan caching, there's a better chance of hitting the cache if the number of possible IN-clause parameter counts is lowered.

Fortunately, Hibernate 5.3.0 and higher has a solution: padding of parameters in the IN clause.

Hibernate can expand the bind parameters to a power of two: 4, 8, 16, 32, 64. This way, an IN clause with 5, 6, or 7 bind parameters will use the 8-parameter IN clause, therefore reusing its execution plan.

If you want to activate this feature, you need to set the property hibernate.query.in_clause_parameter_padding=true.
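In a Spring Boot application like the question's, the same flag would presumably be passed through to Hibernate with the spring.jpa.properties prefix:

```properties
# Enable power-of-two padding of IN-clause bind parameters (Hibernate 5.3+)
spring.jpa.properties.hibernate.query.in_clause_parameter_padding=true
```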

For more information, see this article and atlassian.

I had the exact same problem using Spring Boot 1.5.7 with Spring Data (Hibernate), and the following config solved the problem (memory leak):

spring:
  jpa:
    properties:
      hibernate:
        query:
          plan_cache_max_size: 64
          plan_parameter_metadata_max_size: 32

Starting with Hibernate 5.2.12, you can specify a Hibernate configuration property to change how literals are bound to the underlying JDBC prepared statements by using the following:

hibernate.criteria.literal_handling_mode=BIND

From the Java documentation, this configuration property has 3 settings:

  1. AUTO (default)
  2. BIND - Increases the likelihood of JDBC statement caching by using bind parameters.
  3. INLINE - Inlines the values rather than using parameters (be careful of SQL injection).
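In a Spring Boot setup like the question's, this property would presumably be passed to Hibernate with the spring.jpa.properties prefix:

```properties
# Bind criteria literals as JDBC parameters instead of inlining them
spring.jpa.properties.hibernate.criteria.literal_handling_mode=BIND
```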

I had a similar issue. The problem is that you are creating the query text yourself rather than using a PreparedStatement, so for each query with different parameters an execution plan is created and cached. If you use a prepared statement, you should see a major improvement in the memory being used.

I had a big issue with this queryPlanCache, so I wrote a Hibernate cache monitor to see the queries in the queryPlanCache. I run it in the QA environment as a Spring task every 5 minutes. With it, I found which IN queries I had to change to solve my cache problem. One detail: I am using Hibernate 4.2.18, and I don't know whether this will be useful with other versions.

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.hibernate.ejb.HibernateEntityManagerFactory;
import org.hibernate.internal.SessionFactoryImpl;
import org.hibernate.internal.util.collections.BoundedConcurrentHashMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CacheMonitor {

private final Logger logger  = LoggerFactory.getLogger(getClass());

@PersistenceContext(unitName = "MyPU")
private void setEntityManager(EntityManager entityManager) {
    HibernateEntityManagerFactory hemf = (HibernateEntityManagerFactory) entityManager.getEntityManagerFactory();
    sessionFactory = (SessionFactoryImpl) hemf.getSessionFactory();
    fillQueryMaps();
}

private SessionFactoryImpl sessionFactory;
private BoundedConcurrentHashMap queryPlanCache;
private BoundedConcurrentHashMap parameterMetadataCache;

/*
 * I tried to use a Map and compareToIgnoreCase to sort the output,
 * but remember that doing so causes a memory leak of its own -
 * it fills the heap even faster than before.
 */

public void log() {
    if (!logger.isDebugEnabled()) {
        return;
    }

    if (queryPlanCache != null) {
        long cacheSize = queryPlanCache.size();
        logger.debug(String.format("QueryPlanCache size is :%s ", Long.toString(cacheSize)));

        for (Object key : queryPlanCache.keySet()) {
            int filterKeysSize = 0;
            // QueryPlanCache.HQLQueryPlanKey (Inner Class)
            Object queryValue = getValueByField(key, "query", false);
            if (queryValue == null) {
                // NativeSQLQuerySpecification
                queryValue = getValueByField(key, "queryString");
                filterKeysSize = ((Set) getValueByField(key, "querySpaces")).size();
                if (queryValue != null) {
                    writeLog(queryValue, filterKeysSize, false);
                }
            } else {
                filterKeysSize = ((Set) getValueByField(key, "filterKeys")).size();
                writeLog(queryValue, filterKeysSize, true);
            }
        }
    }

    if (parameterMetadataCache != null) {
        long cacheSize = parameterMetadataCache.size();
        logger.debug(String.format("ParameterMetadataCache size is :%s ", Long.toString(cacheSize)));
        for (Object key : parameterMetadataCache.keySet()) {
            logger.debug("Query:{}", key);
        }
    }
}

private void writeLog(Object query, Integer size, boolean isJpql) {
    if (query == null || query.toString().trim().isEmpty()) {
        return;
    }
    StringBuilder builder = new StringBuilder();
    builder.append(isJpql ? "JPQL " : "NATIVE ");
    builder.append("filterKeysSize").append(":").append(size);
    builder.append("\n").append(query).append("\n");
    logger.debug(builder.toString());
}

private void fillQueryMaps() {
    Field queryPlanCacheSessionField = null;
    Field queryPlanCacheField = null;
    Field parameterMetadataCacheField = null;
    try {
        queryPlanCacheSessionField = searchField(sessionFactory.getClass(), "queryPlanCache");
        queryPlanCacheSessionField.setAccessible(true);
        queryPlanCacheField = searchField(queryPlanCacheSessionField.get(sessionFactory).getClass(), "queryPlanCache");
        queryPlanCacheField.setAccessible(true);
        parameterMetadataCacheField = searchField(queryPlanCacheSessionField.get(sessionFactory).getClass(), "parameterMetadataCache");
        parameterMetadataCacheField.setAccessible(true);
        queryPlanCache = (BoundedConcurrentHashMap) queryPlanCacheField.get(queryPlanCacheSessionField.get(sessionFactory));
        parameterMetadataCache = (BoundedConcurrentHashMap) parameterMetadataCacheField.get(queryPlanCacheSessionField.get(sessionFactory));
    } catch (Exception e) {
        logger.error("Failed fillQueryMaps", e);
    } finally {
        if (queryPlanCacheSessionField != null) {
            queryPlanCacheSessionField.setAccessible(false);
        }
        if (queryPlanCacheField != null) {
            queryPlanCacheField.setAccessible(false);
        }
        if (parameterMetadataCacheField != null) {
            parameterMetadataCacheField.setAccessible(false);
        }
    }
}

private <T> T getValueByField(Object toBeSearched, String fieldName) {
    return getValueByField(toBeSearched, fieldName, true);
}

@SuppressWarnings("unchecked")
private <T> T getValueByField(Object toBeSearched, String fieldName, boolean logErro) {
    Boolean accessible = null;
    Field f = null;
    try {
        f = searchField(toBeSearched.getClass(), fieldName, logErro);
        accessible = f.isAccessible();
        f.setAccessible(true);
        return (T) f.get(toBeSearched);
    } catch (Exception e) {
        if (logErro) {
            logger.error("Field: {} error trying to get for: {}", fieldName, toBeSearched.getClass().getName());
        }
        return null;
    } finally {
        if (accessible != null) {
            f.setAccessible(accessible);
        }
    }
}

private Field searchField(Class<?> type, String fieldName) {
    return searchField(type, fieldName, true);
}

private Field searchField(Class<?> type, String fieldName, boolean log) {

    for (Class<?> c = type; c != null; c = c.getSuperclass()) {
        for (Field f : c.getDeclaredFields()) {
            if (fieldName.equals(f.getName())) {
                return f;
            }
        }
    }
    if (log) {
        logger.warn("Field: {} not found for type: {}", fieldName, type.getName());
    }
    return null;
}
}

We also had a QueryPlanCache with growing heap usage. We had IN-queries which we rewrote, and additionally we have queries which use custom types. It turned out that the Hibernate class CustomType didn't properly implement equals and hashCode, thereby creating a new key for every query instance. This is solved in Hibernate 5.3; see https://hibernate.atlassian.net/browse/HHH-12463. You still need to properly implement equals/hashCode in your userTypes to make it work properly.

We faced this issue with the query plan cache growing too fast, and the old-gen heap was growing along with it, as the GC was unable to collect it. The culprit was a JPA query taking more than 200000 ids in the IN clause. To optimise the query, we used joins instead of fetching ids from one table and passing them into the other table's select query.
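As a sketch of that rewrite (the entity and field names here are invented for illustration, not from the original system):

```java
// Hypothetical JPQL rewrite: join instead of fetch-ids-then-IN.
public final class QueryRewrite {

    // Before: fetch customer ids first, then pass them to an IN clause -
    // one query plan per distinct id-list size, and a huge parameter list.
    public static final String IN_CLAUSE_VERSION =
            "SELECT o FROM Order o WHERE o.customer.id IN :ids";

    // After: join the two entities directly - a single, stable query plan.
    public static final String JOIN_VERSION =
            "SELECT o FROM Order o JOIN o.customer c WHERE c.active = true";
}
```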

TL;DR: Try to replace the IN() queries with ANY(), or eliminate them.

Explanation:
If a query contains IN(...), then a plan is created for each count of values inside IN(...), since the query is different each time. So if you have IN('a','b','c') and IN('a','b','c','d','e'), those are two different query strings/plans to cache. This answer tells more about it.
In the case of ANY(...), a single (array) parameter can be passed, so the query string remains the same and the prepared statement plan is cached once (example given below).

Cause:
This line might cause the issue:

List<SomeObject> findByNameAndUrlIn(String name, Collection<String> urls);

because under the hood it generates a different IN() query for every count of values in the "urls" collection.

Warning:
You may have an IN() query without writing it and even without knowing about it.
ORMs such as Hibernate may generate them in the background - sometimes in unexpected places and sometimes in non-optimal ways. So consider enabling query logs to see the actual queries you have.

Fix:
Here is (pseudo)code that may fix the issue:

String query = "SELECT * FROM trending_topic t WHERE t.name = ? AND t.url = ANY(?)";
PreparedStatement preparedStatement = connection.prepareStatement(query);
preparedStatement.setString(1, name); // safely bind the first query parameter to name
preparedStatement.setArray(2, connection.createArrayOf("text", urls.toArray())); // bind the 2nd parameter to an array of texts, like "= ANY(ARRAY['aaa','bbb'])"

But:
Don't take any solution as a ready-to-use answer. Make sure to test the final performance on actual/big data before going to production - no matter which answer you choose. Why? Because IN and ANY both have pros and cons, and they can bring serious performance issues if used improperly (see the examples in the references below). Also make sure to use parameter binding to avoid security issues.

References:
100x faster Postgres performance by changing 1 line - performance of ANY(ARRAY[]) vs ANY(VALUES())
Index not used with =any() but used with in - different performance of IN and ANY
Understanding SQL Server query plan cache

Hope this helps. Be sure to leave feedback on whether it worked or not, in order to help people like you. Thanks!
