
Solr - Java heap space tuning for Sitecore

In our Sitecore 8.2 installation we use Solr 5.1.0 as the indexing system. Recently we have had some issues like this:

[sitecore_analytics_index] org.apache.solr.common.SolrException: Error opening new searcher
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
Caused by: java.lang.OutOfMemoryError: Java heap space

What is the correct way to choose the heap size to give to Solr?

At the moment, among the different cores, the only one that exceeds a few hundred megabytes is sitecore_analytics_index, which has a size of 32.67 GB and the following characteristics:

  • Num Docs: 102015908
  • Max Doc: 105114766
  • Heap Memory Usage: -1
  • Deleted Docs: 3098858
  • Version: 5563749
  • Impl: org.apache.solr.core.NRTCachingDirectoryFactory
  • org.apache.lucene.store.NRTCachingDirectory: NRTCachingDirectory(lockFactory=org.apache.lucene.store.NativeFSLockFactory@2e51764c; maxCacheMB=48.0 maxMergeSizeMB=4.0)

The server has 6 GB of RAM, 4 GB of which are dedicated to Java. Below are some of the JVM arguments:

-XX:+CMSParallelRemarkEnabled
-XX:+CMSScavengeBeforeRemark
-XX:+ParallelRefProcEnabled
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCDateStamps
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintHeapAtGC
-XX:+PrintTenuringDistribution
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=50
-XX:CMSMaxAbortablePrecleanTime=6000
-XX:ConcGCThreads=4
-XX:MaxTenuringThreshold=8
-XX:NewRatio=3
-XX:ParallelGCThreads=4
-XX:PretenureSizeThreshold=64m
-XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90
-Xms4G
-Xmx4G
-Xss256k
-verbose:gc

Based on this amount of data, what is the correct configuration of the heap?

The right amount of memory to allocate to the JVM is between 6 and 12 GB, out of 8-16 GB dedicated to the server.
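For Solr 5.x started via the bundled scripts, the heap is usually raised in bin/solr.in.sh (or bin\solr.in.cmd on Windows) rather than by editing the launch command directly. A minimal sketch, assuming a host with roughly 16 GB of RAM so that 12 GB can be given to Solr:

```shell
# bin/solr.in.sh -- heap settings read by the Solr 5.x start script.
# SOLR_HEAP sets -Xms and -Xmx to the same value, which avoids pauses
# from heap resizing. "12g" assumes the server has ~16 GB of RAM.
SOLR_HEAP="12g"

# Alternatively, set min and max explicitly via SOLR_JAVA_MEM.
# Use one mechanism or the other, not both:
# SOLR_JAVA_MEM="-Xms12g -Xmx12g"
```

After changing the file, restart Solr (bin/solr restart) for the new heap size to take effect.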

You already have a large analytics index, and with time it will grow even more; therefore you will keep experiencing high memory utilisation due to the large number of index write and commit operations. I would recommend that you consider sharding your big indexes or using SolrCloud, which is under Experimental Support for your Sitecore version 8.2; read more here.
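To keep an eye on how fast the index is growing, the CoreAdmin STATUS call returns the per-core figures quoted above. A sketch, assuming Solr is reachable locally on the default port 8983:

```shell
# Query the CoreAdmin API for the analytics core's status. The JSON
# response includes numDocs, maxDoc, deletedDocs and sizeInBytes --
# the same figures shown in the admin UI core overview.
curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=sitecore_analytics_index&wt=json"
```

Watching sizeInBytes and deletedDocs over time helps decide when a shard split or an optimize is due.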
