Elasticsearch alpine docker with jdk8 java.time.Instant causes epochSecond error
I recently tried 2.4.6-alpine and changed java.util.Date to the JDK 8 java.time.Instant.
The Log document class is wired in automatically by spring-boot.
import java.time.Instant;

@Document(indexName = "log")
public class Log {
    @Id
    private String id;

    @Field(type = FieldType.Date, store = true)
    private Instant timestamp = null;
    ...
The previous Log document looked like this:
import java.util.Date;

@Document(indexName = "log")
public class Log {
    @Id
    private String id;

    @Field(type = FieldType.Date, store = true)
    private Date timestamp = null;
With java.util.Date on ES 2.4.6-alpine, and with java.time.Instant on ES 2.4.6, I have no problems. However, with java.time.Instant on ES 2.4.6-alpine, I see the following error. The combination of Alpine Linux and the java.time format seems to be the problem.
SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [/v1] threw exception [Request processing failed; nested exception is MapperParsingException[failed to parse [timestamp]]; nested: IllegalArgumentException[unknown property [epochSecond]];] with root cause
java.lang.IllegalArgumentException: unknown property [epochSecond]
at org.elasticsearch.index.mapper.core.DateFieldMapper.innerParseCreateField(DateFieldMapper.java:520)
at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:241)
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:321)
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:311)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:328)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:254)
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:124)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:309)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:533)
at org.elasticsearch.index.shard.IndexShard.prepareCreateOnPrimary(IndexShard.java:510)
at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:214)
at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:66)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
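Presumably the epochSecond in the error reflects how the Instant is being serialized: java.time.Instant exposes its state through getEpochSecond() and getNano(), so a bean-introspecting JSON mapper (e.g. Jackson without its JSR-310 module) writes it as an object like {"epochSecond":...,"nano":...} rather than a date string the mapping understands. A quick JDK-only sketch:

```java
import java.time.Instant;

public class InstantShape {
    public static void main(String[] args) {
        Instant t = Instant.ofEpochSecond(1513716676L);
        // Bean introspection finds these two getters, so a generic JSON
        // mapper emits {"epochSecond":1513716676,"nano":0} for the field.
        System.out.println("epochSecond=" + t.getEpochSecond());
        System.out.println("nano=" + t.getNano());
        // toString() is the ISO-8601 form a date mapping would accept
        System.out.println(t); // 2017-12-19T20:51:16Z
    }
}
```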
Any suggestions for using java.time.* with alpine elasticsearch?
After the docker-compose up -d command, when I run curl -XGET localhost:9200/* I see that there is some initial data. That data comes back even after curl -XDELETE, docker-compose down, and docker-compose up -d. The initial data from the elasticsearch:2.4.6 and elasticsearch:2.4.6-alpine docker images is identical:
{
  "log": {
    "aliases": {},
    "mappings": {
      "log": {
        "properties": {
          "timestamp": {
            "type": "date",
            "store": true,
            "format": "strict_date_optional_time||epoch_millis"
          }
        }
      }
    },
    "settings": {
      "index": {
        "refresh_interval": "1s",
        "number_of_shards": "5",
        "creation_date": "1513716676662",
        "store": {
          "type": "fs"
        },
        "number_of_replicas": "1",
        "uuid": "qlj9xxxxxxxxxxxxxxoisA",
        "version": {
          "created": "2040699"
        }
      }
    },
    "warmers": {}
  }
}
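For reference, the strict_date_optional_time||epoch_millis format in that mapping accepts either an ISO-8601 string or a number of epoch milliseconds, and java.time can produce both directly (using the index's own creation_date value as sample input):

```java
import java.time.Instant;

public class MappingFriendlyForms {
    public static void main(String[] args) {
        Instant t = Instant.ofEpochMilli(1513716676662L);
        // strict_date_optional_time: an ISO-8601 date-time string
        System.out.println(t.toString());     // 2017-12-19T20:51:16.662Z
        // epoch_millis: milliseconds since the epoch
        System.out.println(t.toEpochMilli()); // 1513716676662
    }
}
```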
Ah. The initial data is populated when the spring-boot service starts, during the automatic wiring of the Log document class used in my Elasticsearch implementation.
Found a good date-time format reference in the javadoc of the org.springframework.data.elasticsearch.annotations.DateFormat class. SOO many time format names, and not one of them matches my output :(
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html
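A custom pattern such as yyyy-MM-dd'T'HH:mm:ss.SSSZ can be checked against an Instant with a plain JDK DateTimeFormatter; note that a zone has to be attached explicitly, since an Instant carries none of its own:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class PatternCheck {
    public static void main(String[] args) {
        DateTimeFormatter f = DateTimeFormatter
                .ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ")
                .withZone(ZoneOffset.UTC); // Instant has no zone; attach one
        // Pattern letter Z renders a zero offset as +0000
        System.out.println(f.format(Instant.ofEpochMilli(1513716676662L)));
        // 2017-12-19T20:51:16.662+0000
    }
}
```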
This error usually comes from submitting a document whose format conflicts with previously indexed documents (for example, changing the date format from Java Date to Java Instant).
When you change the document format, you need to clear the corresponding index in Elasticsearch.
You can clear an index with the DELETE API (use * to clear them all, e.g. curl -XDELETE localhost:9200/*), and verify a clean index with the GET API (curl -XGET localhost:9200/*, or just go to http://localhost:9200/* in a browser; {} means your indexes are empty).
(This assumes you are not also trying to stand up a fresh ES 2.4.6-alpine for testing. I have seen other people's docker setups that do not actually get rid of all the old data in a "clean" install.)
To make it work with the elasticsearch:2.4.6-alpine docker image and my spring-boot 1.5.9.RELEASE with auto-injection, I had to add format = DateFormat.custom, pattern = "yyyy-MM-dd'T'HH:mm:ss.SSSZ" to the @Field annotation. Apparently the default org.springframework.data.elasticsearch.annotations.DateFormat.none does not work with elasticsearch running on alpine. It must pick up a date-time format from the Alpine 3.7 OS that is incompatible with the one from the CentOS-based image.
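Applied to the Log document from the question, the annotation would look roughly like this (a sketch against the spring-data-elasticsearch annotations that ship with spring-boot 1.5.9.RELEASE):

```java
import java.time.Instant;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.DateFormat;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

@Document(indexName = "log")
public class Log {
    @Id
    private String id;

    // Explicit custom pattern instead of the default DateFormat.none,
    // so the serialized value matches what the index mapping expects
    @Field(type = FieldType.Date, store = true,
           format = DateFormat.custom, pattern = "yyyy-MM-dd'T'HH:mm:ss.SSSZ")
    private Instant timestamp = null;
}
```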