I have integrated Elasticsearch 1.7.1 with a Spring application. I have a cron job which updates the Elasticsearch index on every run. I followed various example codes available on GitHub to make it work. First, I autowired ElasticsearchOperations for indexing:
@Autowired
private ElasticsearchOperations elasticsearchOperations;
Then, inside a loop, indexing is performed in the following manner:
for (int i = 0; i < list.size(); i++) {
    CategoryProductSearch search = new CategoryProductSearch();
    // set data to fields
    System.out.println("BEFORE SAVING DATA");
    IndexQuery indexQuery = new IndexQueryBuilder()
            .withId(search.getId())
            .withObject(search)
            .build();
    //indexQuery.setId(search.getId());
    //indexQuery.setObject(search);
    //elasticsearchOperations.createIndex(CategoryProductSearch.class);
    elasticsearchOperations.putMapping(CategoryProductSearch.class);
    elasticsearchOperations.index(indexQuery);
    elasticsearchOperations.refresh(CategoryProductSearch.class, true);
    System.out.println("SAVING DATA");
}
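For reference, here is a minimal sketch of the same loop with the one-time setup hoisted out and the documents sent as a single bulk request, assuming the same Spring Data Elasticsearch 1.x API used above (`buildSearchItem` is a hypothetical placeholder for the field-setting step):

```java
// One-time setup: create the index and put the mapping once, not on every iteration
if (!elasticsearchOperations.indexExists(CategoryProductSearch.class)) {
    elasticsearchOperations.createIndex(CategoryProductSearch.class);
}
elasticsearchOperations.putMapping(CategoryProductSearch.class);

List<IndexQuery> queries = new ArrayList<>();
for (int i = 0; i < list.size(); i++) {
    // buildSearchItem is a hypothetical helper that sets the fields
    CategoryProductSearch search = buildSearchItem(list.get(i));
    queries.add(new IndexQueryBuilder()
            .withId(search.getId())
            .withObject(search)
            .build());
}
elasticsearchOperations.bulkIndex(queries);                         // one request instead of N
elasticsearchOperations.refresh(CategoryProductSearch.class, true); // refresh once at the end
```

This is only a sketch of the restructuring, not a drop-in fix; whether it resolves the cluster warning depends on the root cause discussed below.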
When I run it for the first time it works as expected. I have renamed the cluster to "mycluster" in elasticsearch.yml inside the config folder. After the first run I can see the folder created. Indexing and searching (implemented in another file) work perfectly. But sometimes the code gets stuck at the line mentioned below and shows a continuous warning: [Chase Stein] node null not part of the cluster Cluster [elasticsearch], ignoring...
elasticsearchOperations.putMapping(CategoryProductSearch.class);
Then, after some time, it throws NoNodeAvailableException. I have read about this issue, and one suggested cause is that there might not be enough disk space for Elasticsearch to index data. I am new to Spring and have tried Elasticsearch for the first time. Is this a disk space issue, or is something wrong in the way I am indexing data? Also, if I manually delete the "mycluster" folder from the /data directory and restart the application, it works fine again.
I have everything set up on my local PC. Whenever I restart the Elasticsearch service, this issue comes back.
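To rule the disk-space theory in or out, the free space can be checked directly, both on the filesystem and as Elasticsearch itself reports it. A quick check, assuming a default single-node setup listening on localhost:9200 (the data path below is a placeholder for your actual installation):

```shell
# Free space on the partition holding Elasticsearch's data directory
# (replace the path with your actual data directory)
df -h /var/lib/elasticsearch

# Filesystem stats as Elasticsearch itself sees them
curl -s 'localhost:9200/_nodes/stats/fs?pretty'
```

If both show plenty of free space, the problem is more likely shard allocation than disk.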
The stack trace for the exception is:
org.elasticsearch.action.UnavailableShardsException: [mycluster][0]
Primary shard is not active or isn't assigned to a known node. Timeout: [1m],
request: index {[mycluster][categoryproductsearch][1], source[{// Source string }]
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.retryBecauseUnavailable(TransportShardReplicationOperationAction.java:655)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:362)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$3.onTimeout(TransportShardReplicationOperationAction.java:515)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:231)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:560)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Hi, I took this code from generator-jhipster-elasticsearch-reindexer:
@Transactional(readOnly = true)
@SuppressWarnings("unchecked")
private <T> void reindexForClass(Class<T> entityClass, JpaRepository<T, Long> jpaRepository,
                                 ElasticsearchRepository<T, Long> elasticsearchRepository) {
    elasticsearchTemplate.deleteIndex(entityClass);
    try {
        elasticsearchTemplate.createIndex(entityClass);
    } catch (IndexAlreadyExistsException e) {
        // Do nothing. Index was already concurrently recreated by some other service.
    }
    elasticsearchTemplate.putMapping(entityClass);
    if (jpaRepository.count() > 0) {
        try {
            Method m = jpaRepository.getClass().getMethod("findAllWithEagerRelationships");
            elasticsearchRepository.save((List<T>) m.invoke(jpaRepository));
        } catch (Exception e) {
            elasticsearchRepository.save(jpaRepository.findAll());
        }
    }
    log.info("Elasticsearch: Indexed all rows for " + entityClass.getSimpleName());
}
As you can see, the index is first deleted, then created again, and only then is the mapping put and the data saved. I think your order is wrong and that it results in some broken shards. You can access the Elasticsearch REST API on localhost:9200 and try a GET request to /_cat/indices to see your indexes.
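The check above can be done from the command line, assuming Elasticsearch is listening on the default localhost:9200:

```shell
# List all indexes with their health, document counts, and size on disk
curl -s 'localhost:9200/_cat/indices?v'

# Cluster health; a "red" status and a non-zero unassigned_shards count
# would match the UnavailableShardsException in the question
curl -s 'localhost:9200/_cluster/health?pretty'
```

If /_cat/indices shows the index as red, deleting and recreating it in the order shown in the reindexer code above should put it back into a consistent state.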