
Elasticsearch index got corrupted

I am getting the following logs from my Elasticsearch box:

org.apache.lucene.index.CorruptIndexException: [myindex][2] Preexisting corrupted index [corrupted_5Y_pGXmYQOG5PGlZURWqxw] caused by: CorruptIndexException[checksum failed (hardware problem?) : expected=9cf1207c actual=4eda74a3 (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/mnt/vol1/myindex/nodes/0/myindex/index/2/index/_3758.fdt")))]
org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=9cf1207c actual=4eda74a3 (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/mnt/vol1/my/indexnodes/0/indices/myindex/2/index/_3758.fdt")))
    at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:211)
    at org.apache.lucene.codecs.CodecUtil.checksumEntireFile(CodecUtil.java:268)
    at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.checkIntegrity(CompressingStoredFieldsReader.java:535)
    at org.apache.lucene.index.SegmentReader.checkIntegrity(SegmentReader.java:624)
    at org.apache.lucene.index.SegmentMerger.<init>(SegmentMerger.java:61)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4158)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3768)
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
    at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:106)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

    at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:452)
    at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:433)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:725)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:578)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:182)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:431)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

Can anyone suggest a fix for this? Also, what are the best practices to reduce such issues?

I ran into a similar issue earlier and had to delete the contents of the replica box and reassign it to the cluster. That fixed it for a few days, but the problem resurfaced today.

Edit: The problem was that all the Elasticsearch boxes were sharing the same hard disk, so the disk crashed when multiple replicas tried to write to the same disk location. Doing that was a mistake, and I have now created a separate disk for each replica.
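
For reference, a minimal sketch of keeping each node's data on its own disk via path.data in elasticsearch.yml; the mount points below are hypothetical placeholders, not the original setup:

    # elasticsearch.yml on the first node -- data lives on a disk
    # dedicated to this node (mount point is a made-up example)
    path.data: /mnt/disk0/elasticsearch

    # elasticsearch.yml on the second node -- a different physical disk
    path.data: /mnt/disk1/elasticsearch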

It depends on which ES version you are using. Prior to 1.3.2, you can try setting index recovery compression to false.
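
If that refers to the indices.recovery.compress setting, a minimal sketch of applying it dynamically through the cluster settings API, assuming a 1.x cluster reachable on localhost:9200 (host and port are assumptions):

    # Dynamically disable compression during shard recovery
    curl -XPUT 'localhost:9200/_cluster/settings' -d '{
      "transient": {
        "indices.recovery.compress": false
      }
    }'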

I hit this exception on 1.3.2. The reason was that the disk was full. Some shards recovered after a while, some did not. Reindexing helped.
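
Before reindexing, per-node disk usage can be confirmed with the cat allocation API (available in 1.x); the host and port below are assumptions:

    # Show disk used/free and shard counts per node
    curl 'localhost:9200/_cat/allocation?v'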
