Kafka Streams: Store is not ready
We recently upgraded Kafka to v1.1 and Confluent to v4.0. Since the upgrade we have run into persistent problems with state stores. Our application starts its streams and then checks whether each state store is ready, killing the application after 100 failed attempts. After the upgrade, at least one stream always fails with:

Store is not ready : the state store, <your stream>, may have migrated to another instance

The stream itself is in the RUNNING state and messages flow through it, but the store still reports as not ready, so I have no idea what is going on.
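The readiness check described above is typically implemented by polling KafkaStreams#store and retrying on InvalidStateStoreException, which is thrown while a store is restoring or has migrated during a rebalance. A minimal sketch of such a loop against the Kafka 1.1 API (the store name parameter, retry count, and sleep interval are assumptions, not taken from the original application):

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public final class StoreReadiness {

    // Poll until the store is queryable, or give up after maxAttempts.
    public static <V> ReadOnlyKeyValueStore<Long, V> waitForStore(
            KafkaStreams streams, String storeName, int maxAttempts)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // Throws InvalidStateStoreException while the store is not
                // yet queryable (restoring, or rebalancing to another instance).
                return streams.store(storeName, QueryableStoreTypes.keyValueStore());
            } catch (InvalidStateStoreException notReady) {
                Thread.sleep(100);
            }
        }
        throw new IllegalStateException(
                "State store " + storeName + " not ready after " + maxAttempts + " attempts");
    }
}
```

Note that in this version of the API the exception is raised per call to store(), so the loop must catch and retry rather than inspect the stream state.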
We run Kafka in a 3-broker cluster. Below is an example stream (not the whole code):
public BaseStream createStreamInstance() {
    final Serializer<JsonNode> jsonSerializer = new JsonSerializer();
    final Deserializer<JsonNode> jsonDeserializer = new JsonDeserializer();
    final Serde<JsonNode> jsonSerde = Serdes.serdeFrom(jsonSerializer, jsonDeserializer);
    MessagePayLoadParser<Note> noteParser = new MessagePayLoadParser<Note>(Note.class);
    GenericJsonSerde<Note> noteSerde = new GenericJsonSerde<Note>(Note.class);
    StreamsBuilder builder = new StreamsBuilder();

    // The reducer below uses sets to combine values.
    // value1 is what is already present in the store.
    // value2 is the incoming message; for notes it should have at most 1 item
    // in its list (since it's 1 attachment / 1 tag per row, but multiple rows per note).
    Reducer<Note> reducer = new Reducer<Note>() {
        @Override
        public Note apply(Note value1, Note value2) {
            value1.merge(value2);
            return value1;
        }
    };

    KTable<Long, Note> noteTable = builder
            .stream(this.subTopic, Consumed.with(jsonSerde, jsonSerde))
            .map(noteParser::parse)
            .groupByKey(Serialized.with(Serdes.Long(), noteSerde))
            .reduce(reducer);

    noteTable.toStream().to(this.pubTopic, Produced.with(Serdes.Long(), noteSerde));

    this.stream = new KafkaStreams(builder.build(), this.properties);
    return this;
}
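One thing worth checking: the reduce() call above does not name its state store, so Kafka Streams generates an internal name for it. If the readiness check queries a fixed store name, naming the store explicitly via Materialized keeps the two in sync. A hedged sketch of the relevant fragment against the Kafka 1.1 API (the store name "note-store" is an assumption; the surrounding variables are from the method above):

```java
KTable<Long, Note> noteTable = builder
        .stream(this.subTopic, Consumed.with(jsonSerde, jsonSerde))
        .map(noteParser::parse)
        .groupByKey(Serialized.with(Serdes.Long(), noteSerde))
        // Name the store so it can be queried as "note-store" via
        // streams.store("note-store", QueryableStoreTypes.keyValueStore()).
        .reduce(reducer, Materialized.<Long, Note, KeyValueStore<Bytes, byte[]>>as("note-store")
                .withKeySerde(Serdes.Long())
                .withValueSerde(noteSerde));
```

This requires the additional imports org.apache.kafka.streams.kstream.Materialized, org.apache.kafka.streams.state.KeyValueStore, and org.apache.kafka.common.utils.Bytes.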
There are some open questions here, such as the ones raised by Matthias, but I will try to answer/help with your actual question: