KAFKA client library (confluent-kafka-go): synchronisation between consumer and producer in the case of auto.offset.reset = latest

I have a use case where I want to implement synchronous request / response on top of Kafka. For example, when the user sends an HTTP request, I want to produce a message on a specific Kafka input topic that triggers a dataflow, eventually resulting in a response produced on an output topic. I then want to consume the message from the output topic and return the response to the caller.

The workflow is: HTTP Request -> produce message on input topic -> (consume message from input topic -> app logic -> produce message on output topic) -> consume message from output topic -> HTTP Response.

To implement this, upon receiving the first HTTP request I want to be able to create a consumer on the fly that will consume from the output topic, before producing a message on the input topic. Otherwise there is a possibility that messages on the output topic are "lost". Consumers in my case have a random group.id and auto.offset.reset = latest for application reasons.

My question is how I can make sure that the consumer is ready before producing messages. I make sure that I call SubscribeTopics before producing, but in my tests so far, when there are no committed offsets and Kafka resets offsets to latest, messages can still be lost and never read by my consumer, because Kafka sometimes considers the consumer to have registered only after the messages were produced.
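For concreteness, here is a minimal sketch of the setup described above (the broker address, topic names, and the use of github.com/google/uuid for the random group.id are placeholders, and the import path assumes v2 of confluent-kafka-go). The race sits between SubscribeTopics returning and the group actually resolving its "latest" offsets:

```go
package main

import (
	"fmt"
	"time"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka" // v2 import path assumed
	"github.com/google/uuid"
)

func main() {
	// Consumer with a random group.id and auto.offset.reset = latest,
	// as described in the question.
	consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092", // placeholder broker
		"group.id":          uuid.NewString(),
		"auto.offset.reset": "latest",
	})
	if err != nil {
		panic(err)
	}
	defer consumer.Close()

	// SubscribeTopics returns immediately; the group join, partition
	// assignment and offset reset all happen asynchronously while polling.
	if err := consumer.SubscribeTopics([]string{"output-topic"}, nil); err != nil {
		panic(err)
	}

	producer, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
	if err != nil {
		panic(err)
	}
	defer producer.Close()

	// Race: if the downstream response lands on output-topic before the
	// consumer's "latest" offset has been resolved, it is never read.
	inputTopic := "input-topic"
	if err := producer.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &inputTopic, Partition: kafka.PartitionAny},
		Value:          []byte("request payload"),
	}, nil); err != nil {
		panic(err)
	}
	producer.Flush(5 * 1000)

	msg, err := consumer.ReadMessage(10 * time.Second)
	if err != nil {
		fmt.Println("no response within timeout:", err)
		return
	}
	fmt.Println("response:", string(msg.Value))
}
```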

My workaround so far is to sleep for a bit after creating the consumer, to give Kafka time to complete the offset reset workflow before I produce messages.

I have also tried to implement the logic in a rebalance callback (triggered when the consumer subscribes to the topic), in which I call Assign with offset = latest for the topic partition, but this doesn't seem to have fixed my issue.
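One sleep-free variant of that callback idea can be sketched (not verified against your exact setup; broker/topic names and the polling loop are assumptions): resolve the offset yourself inside the rebalance callback by querying the high watermark with QueryWatermarkOffsets, Assign those explicit offsets, and only signal "ready to produce" once the assignment has been made. Anything produced after the signal then sits at or above the assigned starting offset.

```go
package main

import (
	"fmt"
	"time"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

// waitUntilAssigned subscribes to topic and blocks until the consumer has been
// assigned partitions whose start offsets are pinned to the current high
// watermark. Messages produced after it returns should be readable.
// Assumes the default (eager) partition.assignment.strategy; a cooperative
// assignor would need IncrementalAssign instead of Assign.
func waitUntilAssigned(c *kafka.Consumer, topic string, timeout time.Duration) error {
	assigned := make(chan error, 1)

	rebalanceCb := func(consumer *kafka.Consumer, ev kafka.Event) error {
		switch e := ev.(type) {
		case kafka.AssignedPartitions:
			parts := make([]kafka.TopicPartition, len(e.Partitions))
			for i, tp := range e.Partitions {
				// Resolve the end of the log now, instead of relying on
				// auto.offset.reset = latest being applied later.
				_, high, err := consumer.QueryWatermarkOffsets(*tp.Topic, tp.Partition, 5000)
				if err != nil {
					assigned <- err
					return err
				}
				tp.Offset = kafka.Offset(high)
				parts[i] = tp
			}
			if err := consumer.Assign(parts); err != nil {
				assigned <- err
				return err
			}
			assigned <- nil
		case kafka.RevokedPartitions:
			return consumer.Unassign()
		}
		return nil
	}

	if err := c.SubscribeTopics([]string{topic}, rebalanceCb); err != nil {
		return err
	}

	// The rebalance callback only runs from inside Poll/ReadMessage, so keep
	// polling until the assignment arrives or the deadline passes.
	deadline := time.Now().Add(timeout)
	for {
		select {
		case err := <-assigned:
			return err
		default:
			c.Poll(100) // other events returned here are ignored in this sketch
			if time.Now().After(deadline) {
				return fmt.Errorf("no partition assignment within %s", timeout)
			}
		}
	}
}
```

The key property is that the HTTP handler does not produce to the input topic until waitUntilAssigned has returned; whether this fully closes the race in your environment is something to verify.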

Hopefully there is a better solution out there than sleep.

Most HTTP client libraries have an implicit timeout. There's no guarantee your consumer will ever consume an event, or that a downstream producer will send data to the "response topic".

Instead, have your initial request immediately return a 202 Accepted status (or 400, for example, if you do request validation) along with some tracking ID. Then require polling GET requests by ID for status updates, returning either 404 or 200 plus some status field in the response body.

You'll need a database to store intermediate state.
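A rough sketch of that shape with net/http (the route names are made up, and the in-memory map merely stands in for the database; the consumer of the output topic would update it when a result arrives):

```go
package main

import (
	"encoding/json"
	"net/http"
	"sync"

	"github.com/google/uuid"
)

// requestStore stands in for the database holding intermediate state.
type requestStore struct {
	mu      sync.RWMutex
	results map[string]string // tracking ID -> status ("pending", "done", ...)
}

func main() {
	store := &requestStore{results: make(map[string]string)}

	// POST /requests: validate, record a tracking ID, trigger the Kafka
	// dataflow asynchronously, and return 202 Accepted immediately.
	http.HandleFunc("/requests", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		id := uuid.NewString()
		store.mu.Lock()
		store.results[id] = "pending"
		store.mu.Unlock()

		// go produceToInputTopic(id, ...) // hypothetical: produce the request here

		w.WriteHeader(http.StatusAccepted)
		json.NewEncoder(w).Encode(map[string]string{"id": id})
	})

	// GET /requests/{id}: clients poll this until the status flips.
	http.HandleFunc("/requests/", func(w http.ResponseWriter, r *http.Request) {
		id := r.URL.Path[len("/requests/"):]
		store.mu.RLock()
		status, ok := store.results[id]
		store.mu.RUnlock()
		if !ok {
			http.NotFound(w, r) // unknown ID -> 404
			return
		}
		json.NewEncoder(w).Encode(map[string]string{"id": id, "status": status})
	})

	http.ListenAndServe(":8080", nil)
}
```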
