
Drop a message Kafka Streams Topology

I would like to know if there is a way to drop a record/message from a Kafka Streams topology.

I have a setup like the following:

 builder.stream("my-source-topic")
                .map(CustomMapper)
                .mapValues(CustomValueMapper)
                .filterNot(CustomFilter)
                .transformValues(CustomValueTransformer);

Each CustomMapper/CustomFilter etc. overrides its respective apply/transform method; they could look like the following. As noted, the error might be unrecoverable, and that is acceptable here: such messages will be handled manually, and a corresponding log entry is written. Assuming the unrecoverable error happens during the first map, how do I prevent the later stages from even processing the record? I would like to stop processing this record and move on to the next one.

    @Override
    public V transform(K readOnlyKey, V value) {
        try {
            // do some logic
            return value;
        } catch (Exception e) {
            // process error - this might be unrecoverable.

            dropRecord(); // this is what I would be looking for, if possible
            return null;  // placeholder so the method compiles
        }
    }

I could kill the thread and have a custom UncaughtExceptionHandler reschedule it; since the offset would not be committed, the faulty record would be processed again.

Creating a wrapper for the objects passed along would require adding a check in each processing step to see whether the record is still valid.
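A minimal sketch of that wrapper idea in plain Java (the `Processed` class and its methods are hypothetical, not part of Kafka Streams): once a step fails, the record is marked dropped and every later step is skipped, so downstream mappers never see the faulty value.

```java
import java.util.function.Function;

// Hypothetical wrapper: once a step fails, the record is marked dropped
// and every later step short-circuits instead of processing the value.
final class Processed<V> {
    private final V value;
    private final boolean dropped;

    private Processed(V value, boolean dropped) {
        this.value = value;
        this.dropped = dropped;
    }

    static <V> Processed<V> of(V value) {
        return new Processed<>(value, false);
    }

    boolean isDropped() { return dropped; }

    V value() { return value; }

    // Apply the next processing step only if the record is still valid;
    // an exception thrown by the step marks the record as dropped.
    <R> Processed<R> map(Function<V, R> step) {
        if (dropped) {
            return new Processed<>(null, true);
        }
        try {
            return new Processed<>(step.apply(value), false);
        } catch (Exception e) {
            // unrecoverable: log here and drop the record
            return new Processed<>(null, true);
        }
    }

    public static void main(String[] args) {
        Processed<Integer> ok = Processed.of(2).map(v -> v * 10).map(v -> v + 1);
        Processed<Integer> bad = Processed.of(2)
                .<Integer>map(v -> { throw new IllegalStateException("boom"); })
                .map(v -> v + 1); // never runs
        System.out.println(ok.value());      // 21
        System.out.println(bad.isDropped()); // true
    }
}
```

The downside is exactly what the question notes: every stage now operates on `Processed<V>` instead of `V`, so each existing mapper and filter has to be adapted.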

Adding a .branch() before each processing step would also require a decent amount of rework.

You can drop a message in a Transformer simply by returning null. See the Javadoc of Transformer#transform. So your example would be:

    @Override
    public V transform(K readOnlyKey, V value) {
        try {
            // do some logic
            return value;
        } catch (Exception e) {
            // process error - this might be unrecoverable.

            return null; // returning null drops the record
        }
    }

Note that you can currently do this only in a Transformer, but not in a ValueTransformer.
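If you only have access to the value, a common workaround is to use an operator whose mapper returns an Iterable, such as KStream#flatMapValues: returning an empty collection drops the record, returning a singleton keeps it. A self-contained java.util.stream analogue of that drop-by-returning-empty idea (the `v * 10` logic and the negative-value error are illustrative assumptions):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DropByEmpty {
    // Keep a value by emitting a singleton stream, drop it by emitting
    // nothing. This mirrors returning an empty Iterable from flatMapValues.
    static List<Integer> process(List<Integer> input) {
        return input.stream()
                .flatMap(v -> {
                    try {
                        if (v < 0) {
                            throw new IllegalArgumentException("unrecoverable: " + v);
                        }
                        return Stream.of(v * 10); // "do some logic"
                    } catch (Exception e) {
                        return Stream.empty();    // drop the faulty record
                    }
                })
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(process(List.of(1, -2, 3))); // [10, 30]
    }
}
```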

Disclaimer: The technical posts on this site are licensed under CC BY-SA 4.0. If you need to repost, please credit this site or the original source. For any questions, contact: yoyou2525@163.com.
