Elasticsearch alpine docker with jdk8 java.time.Instant causes epochSecond error
I recently tried 2.4.6-alpine and changed java.util.Date to the JDK 8 java.time.Instant.
The Log document is auto-injected via spring-boot.
import java.time.Instant;

@Document(indexName = "log")
public class Log {

    @Id
    private String id;

    @Field(type = FieldType.Date, store = true)
    private Instant timestamp = null;
    ...
The previous Log document looked like this.

import java.util.Date;

@Document(indexName = "log")
public class Log {

    @Id
    private String id;

    @Field(type = FieldType.Date, store = true)
    private Date timestamp = null;
With java.util.Date on ES 2.4.6-alpine, and with java.time.Instant on ES 2.4.6, I had no problems. However, with java.time.Instant on ES 2.4.6-alpine, I see the following error. Alpine Linux and the java.time format seem to be the problem.
SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [/v1] threw exception [Request processing failed; nested exception is MapperParsingException[failed to parse [timestamp]]; nested: IllegalArgumentException[unknown property [epochSecond]];] with root cause
java.lang.IllegalArgumentException: unknown property [epochSecond]
at org.elasticsearch.index.mapper.core.DateFieldMapper.innerParseCreateField(DateFieldMapper.java:520)
at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:241)
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:321)
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:311)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:328)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:254)
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:124)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:309)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:533)
at org.elasticsearch.index.shard.IndexShard.prepareCreateOnPrimary(IndexShard.java:510)
at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:214)
at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:66)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
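The unknown property [epochSecond] in the stack trace hints at what likely went wrong: an Instant is internally just an epoch-second plus a nanosecond adjustment, and a JSON mapper that reflects over those fields (instead of applying a date format) emits an object like {"epochSecond":...,"nano":...}, which a "date"-typed field cannot parse. A minimal sketch of the mismatch (my reading of the error; the fixed timestamp value is hypothetical):

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

public class InstantShape {
    public static void main(String[] args) {
        // Hypothetical log timestamp, fixed for reproducibility.
        Instant ts = Instant.ofEpochMilli(1513716676662L);

        // An Instant is just these two fields; a mapper that serializes them
        // directly produces {"epochSecond":1513716676,"nano":662000000},
        // which Elasticsearch rejects with "unknown property [epochSecond]".
        System.out.println("epochSecond=" + ts.getEpochSecond() + " nano=" + ts.getNano());

        // Shapes the index mapping (strict_date_optional_time||epoch_millis) does accept:
        System.out.println(DateTimeFormatter.ISO_INSTANT.format(ts)); // ISO-8601 string
        System.out.println(ts.toEpochMilli());                        // epoch millis
    }
}
```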
Any suggestions for using java.time.* with alpine elasticsearch?
After the docker-compose up -d command, when I curl -XGET localhost:9200/* I see that there is some initial data. That data comes back even after -XDELETE, docker-compose down, and docker-compose up -d commands.
The initial data from the elasticsearch:2.4.6 and elasticsearch:2.4.6-alpine docker images is identical.
{
  "log":{
    "aliases":{},
    "mappings":{
      "log":{
        "properties":{
          "timestamp":{
            "type":"date",
            "store":true,
            "format":"strict_date_optional_time||epoch_millis"
          }
        }
      }
    },
    "settings":{
      "index":{
        "refresh_interval":"1s",
        "number_of_shards":"5",
        "creation_date":"1513716676662",
        "store":{
          "type":"fs"
        },
        "number_of_replicas":"1",
        "uuid":"qlj9xxxxxxxxxxxxxxoisA",
        "version":{
          "created":"2040699"
        }
      }
    },
    "warmers":{}
  }
}
Ah. The initial data is populated during spring-boot startup, when the Log document class used in my Elasticsearch implementation is auto-injected.
Found a good date-time format reference in the javadoc of the org.springframework.data.elasticsearch.annotations.DateFormat class. SO many time-format names, and not one of them matches my output :(
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html
This error is usually caused by the format of a document you are submitting conflicting with previously indexed documents (e.g. changing the date format from Java Date to Java Instant).
When you change the document format, you need to clear the corresponding index from Elasticsearch.
You can clear an index with the DELETE API (use * to clear them all, e.g. curl -XDELETE localhost:9200/*), and verify a clean index with the GET API (curl -XGET localhost:9200/*, or just go to http://localhost:9200/* in a browser; {} means your indexes are empty).
(This assumes you are not also trying to spin up a fresh ES 2.4.6-alpine to test against. I have seen others do various things with their docker setup without actually getting a "clean" install that is rid of all the old data.)
To get this working with the elasticsearch:2.4.6-alpine docker image and my spring-boot 1.5.9-RELEASE with auto-injection, I had to add format = DateFormat.custom, pattern = "yyyy-MM-dd'T'HH:mm:ss.SSSZ" to the @Field annotation. Apparently the default org.springframework.data.elasticsearch.annotations.DateFormat.none does not work for elasticsearch running on alpine. It must be getting a date-time format from the Alpine 3.7 OS that is incompatible with the one from the CentOS OS.
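Putting the fix together, the annotated field would look roughly like this (a sketch, assuming the field stays java.time.Instant as in the question and the Spring Data Elasticsearch 2.x @Field attributes from the spring-boot 1.5.9-RELEASE era; treat it as a mapping fragment, not a verified build):

```java
import java.time.Instant;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.DateFormat;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

@Document(indexName = "log")
public class Log {

    @Id
    private String id;

    // Explicit custom pattern instead of the default DateFormat.none,
    // so the value is indexed as a formatted date string
    // (e.g. 2017-12-19T20:51:16.662+0000) rather than an
    // {epochSecond, nano} object.
    @Field(type = FieldType.Date, store = true,
           format = DateFormat.custom, pattern = "yyyy-MM-dd'T'HH:mm:ss.SSSZ")
    private Instant timestamp = null;
}
```

After changing the annotation, delete the stale index (curl -XDELETE localhost:9200/log) so the new mapping is applied on the next startup.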