Issue with Spring Cloud Stream using the Default Schema Registry client instead of the Avro Schema Registry client

We are using Kafka with Spring Cloud Stream and we need to connect to a Confluent Schema Registry in our Spring Boot component; see https://github.com/donalthurley/KafkaConsumeScsAndConfluent.

We have added the following configuration to create the required ConfluentSchemaRegistryClient bean (see https://github.com/donalthurley/KafkaConsumeScsAndConfluent/blob/master/src/main/java/com/example/kafka/KafkaConfig.java), which should override the default schema registry client from Spring Cloud Stream.
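
For reference, the typical shape of such a configuration is sketched below, following the Spring Cloud Stream schema-registry client pattern. This is a sketch rather than a copy of the linked KafkaConfig.java, and the endpoint property name and default value are assumptions to adjust for your environment:

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.cloud.stream.schema.client.ConfluentSchemaRegistryClient;
    import org.springframework.cloud.stream.schema.client.SchemaRegistryClient;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class KafkaConfig {

        // Expose a ConfluentSchemaRegistryClient so it replaces the DefaultSchemaRegistryClient
        // that Spring Cloud Stream would otherwise use when registering Avro schemas.
        @Bean
        public SchemaRegistryClient schemaRegistryClient(
                @Value("${spring.cloud.stream.schemaRegistryClient.endpoint:http://localhost:8081}") String endpoint) {
            ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
            client.setEndpoint(endpoint);
            return client;
        }
    }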

However, we have been seeing the following failure intermittently after some deployments.

org.springframework.messaging.MessageDeliveryException: failed to send Message to channel

The underlying cause shows this stack trace:

Caused by: java.lang.NullPointerException
    at org.springframework.cloud.stream.schema.client.DefaultSchemaRegistryClient.register(DefaultSchemaRegistryClient.java:71)
    at org.springframework.cloud.stream.schema.avro.AvroSchemaRegistryClientMessageConverter.resolveSchemaForWriting(AvroSchemaRegistryClientMessageConverter.java:238)
    at org.springframework.cloud.stream.schema.avro.AbstractAvroMessageConverter.convertToInternal(AbstractAvroMessageConverter.java:179)
    at org.springframework.messaging.converter.AbstractMessageConverter.toMessage(AbstractMessageConverter.java:201)
    at org.springframework.messaging.converter.AbstractMessageConverter.toMessage(AbstractMessageConverter.java:191)
    at org.springframework.messaging.converter.CompositeMessageConverter.toMessage(CompositeMessageConverter.java:83)
    at org.springframework.cloud.stream.binding.MessageConverterConfigurer$OutboundContentTypeConvertingInterceptor.doPreSend(MessageConverterConfigurer.java:322)
    at org.springframework.cloud.stream.binding.MessageConverterConfigurer$AbstractContentTypeInterceptor.preSend(MessageConverterConfigurer.java:351)
    at org.springframework.integration.channel.AbstractMessageChannel$ChannelInterceptorList.preSend(AbstractMessageChannel.java:611)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453)

The fact that the DefaultSchemaRegistryClient is being invoked by the AvroSchemaRegistryClientMessageConverter would indicate to us that there is a problem with the wiring of our ConfluentSchemaRegistryClient bean.

Is there something else required in our configuration to ensure the ConfluentSchemaRegistryClient bean is wired correctly?

It worked for me. This is what I did:

  • used the configuration class exactly like you did
  • used the @EnableSchemaRegistryClient annotation in the project
  • added the avro serializer to the classpath: io.confluent:kafka-avro-serializer
  • set the properties as follows:

    spring.cloud.stream.kafka.bindings.channel.consumer.configuration.schema.registry.url=<address of your registry>
    spring.cloud.stream.kafka.bindings.channel.consumer.configuration.specific.avro.reader=true

where channel corresponds to the channel name in your app.
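
As a concrete illustration (a minimal sketch assuming a binding named input and a registry reachable at http://localhost:8081; substitute your own values), the two properties would read:

    spring.cloud.stream.kafka.bindings.input.consumer.configuration.schema.registry.url=http://localhost:8081
    spring.cloud.stream.kafka.bindings.input.consumer.configuration.specific.avro.reader=true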

I think the last property is very important to tell Spring to use the Avro serializer instead of the default serializer.

I am using Spring Cloud Stream Elmhurst.RELEASE, so property names might differ slightly if you use another version.

I have now moved the @EnableSchemaRegistryClient annotation to the project application class (see https://github.com/donalthurley/KafkaConsumeScsAndConfluent/commit/b4cf5427d7ab0a4fed619fe54b042890f5ccb594) and redeployed, and this has fixed the issue I had when it was deployed on our environment.
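
For illustration, the application class then looks roughly like the sketch below; the class name and the Sink binding are assumptions for the example rather than a copy of the linked commit:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.messaging.Sink;
    import org.springframework.cloud.stream.schema.client.EnableSchemaRegistryClient;

    @SpringBootApplication
    @EnableSchemaRegistryClient   // moved here from the producer/consumer classes
    @EnableBinding(Sink.class)    // whichever bindings the application already declares
    public class KafkaConsumeApplication {

        public static void main(String[] args) {
            SpringApplication.run(KafkaConsumeApplication.class, args);
        }
    }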

I had been annotating my producer and consumer classes with the @EnableSchemaRegistryClient annotation.

In all my local testing this had been working against my local Docker Confluent schema registry. However, on deploying to our environments it worked most of the time but failed occasionally after some deploys.

I hadn't succeeded in replicating this locally.

I also noticed in local testing that I get the same null pointer exception stack trace if I remove the configuration for the Confluent Schema Registry.

So the problem I think I was seeing is that the AvroSchemaRegistryClientMessageConverter bean was not wired with the Confluent schema registry bean when the @EnableSchemaRegistryClient annotation was not present in the project application class.

I don't understand why exactly that would be necessary, but I think it may have solved the issue.
