Lucene / Hibernate Search Lock Exception

I use Hibernate Search to index and run full-text searches over items in a web application, without any problems!

From my pom.xml:

<hibernate.search.version>3.4.2.Final</hibernate.search.version>
<apache.lucene.version>3.6.2</apache.lucene.version>
<apache.solr.version>3.6.2</apache.solr.version>
<hibernate.version>3.6.9.Final</hibernate.version>

Now, before going to production, I tried to stress test the search feature of my web application using Apache JMeter. When testing with more than one thread, I receive tons of the following exception:

17:11:57,670 ERROR LogErrorHandler:82 - Exception occurred org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/opt/myapp/item_index/myapp.item.domain.Item/write.lock
Primary Failure:
    Entity myapp.item.domain.Item  Id 4  Work Type  org.hibernate.search.backend.DeleteLuceneWork
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/opt/myapp/item_index/myapp.item.domain.Item/write.lock
    at org.apache.lucene.store.Lock.obtain(Lock.java:84)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1098)
    at org.hibernate.search.backend.Workspace.createNewIndexWriter(Workspace.java:202)
    at org.hibernate.search.backend.Workspace.getIndexWriter(Workspace.java:180)
    at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:103)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
17:11:57,670 ERROR PerDPQueueProcessor:118 - Unexpected error in Lucene Backend: 
org.hibernate.search.SearchException: Unable to remove class myapp.item.domain.Item#4 from index.
    at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:77)
    at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:106)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NullPointerException
    at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:72)
    ... 7 more

My index creation is done using the following function (called from a @Transactional method):

@Override
public void createInitialIndexFromDB() {
    // get the session and set the flush mode to MANUAL to control the commit
    FullTextSession fullTextSession = getFullTextSession();
    fullTextSession.setFlushMode(FlushMode.MANUAL);
    // do not add any data to the object context
    fullTextSession.setCacheMode(CacheMode.IGNORE);

    addResultsToIndex(fullTextSession, FETCH_ITEMS_TO_INDEX);
    addResultsToIndex(fullTextSession, FETCH_DRAFTS_TO_INDEX);
    addResultsToIndex(fullTextSession, FETCH_RESERVATIONS_TO_INDEX);
    addResultsToIndex(fullTextSession, FETCH_SALES_TO_INDEX);

    fullTextSession.flushToIndexes();
    fullTextSession.clear();
}

private void addResultsToIndex(FullTextSession fullTextSession, String query) {
    ScrollableResults results = fullTextSession.createQuery(query).scroll(
            ScrollMode.FORWARD_ONLY);
    for (int index = 1; results.next(); index++) {
        fullTextSession.index(results.get(0));
        if (index % BATCH_SIZE == 0 || results.isLast()) {
            fullTextSession.flushToIndexes();
            fullTextSession.clear();
        }
    }
}

private FullTextSession getFullTextSession() {
    Session session = this.sessionFactory.getCurrentSession();
    return Search.getFullTextSession(session);
}
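
For comparison, Hibernate Search 3.x also ships a MassIndexer that handles batching, cache mode and parallel loading internally. Below is a minimal sketch of the same initial-indexing job; the batch size and thread count are illustrative assumptions, not tuned values:

import org.hibernate.CacheMode;
import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

public void rebuildIndexWithMassIndexer(Session session) throws InterruptedException {
    FullTextSession fullTextSession = Search.getFullTextSession(session);
    fullTextSession.createIndexer()          // covers every @Indexed entity
            .batchSizeToLoadObjects(25)      // illustrative, cf. BATCH_SIZE above
            .cacheMode(CacheMode.IGNORE)     // same cache policy as the manual loop
            .threadsToLoadObjects(4)         // illustrative parallelism
            .startAndWait();                 // blocks until the rebuild completes
}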

After index creation, all changes to my indexes go through a custom FullTextIndexEventListener:

public final class HibernateItemEventListener extends
        FullTextIndexEventListener {

    private static final Logger log = LoggerFactory
            .getLogger(HibernateItemEventListener.class);

    public HibernateItemEventListener() {
        super(Installation.SINGLE_INSTANCE);
    }

    @Override
    public void onPostInsert(PostInsertEvent event) {

        log.debug("onPostInsert");
        if (!isIndexed(event.getEntity()))
            return;

        // Without these checks the elements are added twice to the index!
        if (event.getEntity() instanceof ItemReservation) 
            return;

        if (event.getEntity() instanceof ItemSale)
            return;

        super.onPostInsert(event);
    }

    @Override
    public void onPostUpdate(PostUpdateEvent event) {
        log.debug("onPostUpdate - Start");
        if (!isIndexed(event.getEntity()))
            return;

        Serializable id = event.getId();
        log.debug("onPostUpdate - Need update for id " + id);

        if (used) {
            boolean identifierRollbackEnabled = event.getSession().getFactory()
                    .getSettings().isIdentifierRollbackEnabled();
            final Object entity = event.getEntity();
            if (searchFactoryImplementor.getDocumentBuilderIndexedEntity(entity
                    .getClass()) != null
                    || searchFactoryImplementor
                            .getDocumentBuilderContainedEntity(entity
                                    .getClass()) != null) {

                // Remove item
                if (entity instanceof Item) {
                    Item item = (Item) entity;
                    if (item.getQuantity() < 1) {
                        processWork(entity, id, WorkType.PURGE, event,
                                identifierRollbackEnabled);
                        return;
                    }
                }

                // Remove reservation
                if (entity instanceof ItemReservation) {
                    ItemReservation ir = (ItemReservation) entity;
                    if (ir.getActive() < 1) {
                        processWork(entity, id, WorkType.PURGE, event,
                                identifierRollbackEnabled);
                        return;
                    }
                }

                // Update entity
                processWork(entity, id, WorkType.UPDATE, event,
                        identifierRollbackEnabled);
            } else {
                // Add entity
                processWork(entity, id, WorkType.ADD, event,
                        identifierRollbackEnabled);
            }
        }
    }

    @Override
    public void onPostDelete(PostDeleteEvent event) {
        log.debug("onPostDelete - Start");
        if (!isIndexed(event.getEntity()))
            return;
        log.debug("onPostDelete - Need delete for id " + event.getId());
        super.onPostDelete(event);
    }

    private boolean isIndexed(Object entity) {
        return entity instanceof Item || entity instanceof Draft
                || entity instanceof ItemReservation
                || entity instanceof ItemSale;
    }
}

The exception above doesn't affect the application itself (the search does work), but it sometimes leads to another, more critical exception (which I suppose is related to the lock problem):

17:11:58,866 ERROR LogErrorHandler:82 - Exception occurred java.io.FileNotFoundException: _iz.fdx
java.io.FileNotFoundException: _iz.fdx
    at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:284)
    at org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:303)
    at org.apache.lucene.index.LogMergePolicy.sizeBytes(LogMergePolicy.java:193)
    at org.apache.lucene.index.LogByteSizeMergePolicy.size(LogByteSizeMergePolicy.java:45)
    at org.apache.lucene.index.LogMergePolicy.useCompoundFile(LogMergePolicy.java:147)
    at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:593)
    at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
    at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3376)
    at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3485)
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3467)
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3451)
    at org.hibernate.search.backend.Workspace.commitIndexWriter(Workspace.java:220)
    at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:109)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
17:11:59,991 ERROR LogErrorHandler:82 - Exception occurred java.io.FileNotFoundException: /opt/myapp/item_index/myapp.item.domain.Item/_iz.cfs (No such file or directory)

Any hints?

EDIT: After adjusting the library versions and making JMeter more aggressive (more threads), the exception and index corruption still appear:

12:08:11,163 ERROR LogErrorHandler:82 - Exception occurred java.io.FileNotFoundException: /opt/myapp/item_index/myapp.item.domain.Item/_8gy.cfs (No such file or directory)
Primary Failure:
    Entity myapp.item.domain.Item  Id 596  Work Type  org.hibernate.search.backend.DeleteLuceneWork
Subsequent failures:
    Entity myapp.item.domain.Item  Id 596  Work Type  org.hibernate.search.backend.AddLuceneWork
    Entity myapp.item.domain.Item  Id 734  Work Type  org.hibernate.search.backend.DeleteLuceneWork
    Entity myapp.item.domain.Item  Id 734  Work Type  org.hibernate.search.backend.AddLuceneWork
    Entity myapp.item.domain.Item  Id 599  Work Type  org.hibernate.search.backend.DeleteLuceneWork
    Entity myapp.item.domain.Item  Id 599  Work Type  org.hibernate.search.backend.AddLuceneWork
    Entity myapp.item.domain.Item  Id 735  Work Type  org.hibernate.search.backend.DeleteLuceneWork
    Entity myapp.item.domain.Item  Id 735  Work Type  org.hibernate.search.backend.AddLuceneWork
    Entity myapp.item.domain.Item  Id 598  Work Type  org.hibernate.search.backend.DeleteLuceneWork
    Entity myapp.item.domain.Item  Id 598  Work Type  org.hibernate.search.backend.AddLuceneWork
    Entity myapp.item.domain.Item  Id 720  Work Type  org.hibernate.search.backend.DeleteLuceneWork
    Entity myapp.item.domain.Item  Id 720  Work Type  org.hibernate.search.backend.AddLuceneWork

java.io.FileNotFoundException: /opt/myapp/item_index/myapp.item.domain.Item/_8gy.cfs (No such file or directory)
    at java.io.RandomAccessFile.open(Native Method)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
    at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:69)
    at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:90)
    at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:91)
    at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:78)
    at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:66)
    at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:55)
    at org.apache.lucene.index.IndexWriter.getFieldInfos(IndexWriter.java:1193)
    at org.apache.lucene.index.IndexWriter.getCurrentFieldInfos(IndexWriter.java:1213)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1149)
    at org.hibernate.search.backend.Workspace.createNewIndexWriter(Workspace.java:202)
    at org.hibernate.search.backend.Workspace.getIndexWriter(Workspace.java:180)
    at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:103)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
12:08:11,163 ERROR PerDPQueueProcessor:118 - Unexpected error in Lucene Backend: 
org.hibernate.search.SearchException: Unable to remove class myapp.item.domain.Item#596 from index.
    at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:77)
    at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:106)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NullPointerException

EDIT: After debugging, the FileNotFoundException (and therefore the IndexWriter NPE) is generated at

IndexWriter writer = new IndexWriter( directoryProvider.getDirectory(), writerConfig );

in the Workspace.createNewIndexWriter() method.

writerConfig:
matchVersion=LUCENE_31
analyzer=org.apache.lucene.analysis.SimpleAnalyzer
delPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy
commit=null
openMode=CREATE_OR_APPEND
similarity=org.apache.lucene.search.DefaultSimilarity
termIndexInterval=128
mergeScheduler=org.hibernate.search.backend.impl.lucene.overrides.ConcurrentMergeScheduler
default WRITE_LOCK_TIMEOUT=1000
writeLockTimeout=1000
maxBufferedDeleteTerms=-1
ramBufferSizeMB=16.0
maxBufferedDocs=-1
mergedSegmentWarmer=null
mergePolicy=[LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=10, maxMergeSize=2147483648, maxMergeSizeForOptimize=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, useCompoundFile=true]
maxThreadStates=8
readerPooling=false
readerTermsIndexDivisor=1
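
For context, the writeLockTimeout=1000 in this dump is Lucene's default write-lock timeout: it bounds how long Lock.obtain() waits for the write.lock file before throwing the LockObtainFailedException shown above, which is what happens when two IndexWriters target the same directory. A minimal standalone sketch (the index path is illustrative):

import java.io.File;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public static void demonstrateWriteLock() throws Exception {
    FSDirectory dir = FSDirectory.open(new File("/tmp/item_index")); // illustrative path
    IndexWriter first = new IndexWriter(dir,
            new IndexWriterConfig(Version.LUCENE_31, new SimpleAnalyzer(Version.LUCENE_31)));
    // The second writer cannot obtain write.lock; after the configured
    // timeout (default 1000 ms) it throws LockObtainFailedException.
    IndexWriter second = new IndexWriter(dir,
            new IndexWriterConfig(Version.LUCENE_31, new SimpleAnalyzer(Version.LUCENE_31)));
}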

Hibernate Search is tightly coupled to Lucene's API, which changes between releases; the Lucene version you're using is quite different from the one used during the development of Hibernate Search 3.4.2.

These are the recommended versions; I'm reading them from the pom.xml of the 3.4.2.Final tag:

<hibernate.search.version>3.4.2.Final</hibernate.search.version>
<apache.lucene.version>3.1.0</apache.lucene.version>
<apache.solr.version>3.1.0</apache.solr.version>
<hibernate.version>3.6.10.Final</hibernate.version>

from the Hibernate Search forum thread

I think I solved it.

The problem was not Hibernate Search itself; it was Atomikos, which killed threads after the concurrent transaction threshold had been exceeded (probably leaving file descriptors open). I increased the concurrent transaction limit (it was 50) and the problem was gone.

Despite this, I still don't understand why Hibernate Search opens an IndexWriter each time I perform a Lucene query.
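
For what it's worth, Hibernate Search 3.x commits and re-opens the IndexWriter around each set of queued index work by default; the exclusive_index_use flag (available from 3.3 on, if memory serves) tells it to hold the writer open instead, which is safe only when a single node writes to the index. A minimal sketch of setting it programmatically, assuming a Hibernate Configuration built by hand rather than from hibernate.cfg.xml:

import org.hibernate.cfg.Configuration;

private Configuration enableExclusiveIndexUse(Configuration cfg) {
    // Hold the IndexWriter open across work sets instead of re-opening it
    // per transaction; only safe when this node is the index's sole writer.
    cfg.setProperty("hibernate.search.default.exclusive_index_use", "true");
    return cfg;
}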
