How to enable a Kafka sink connector to insert data from topics into tables as and when the sink is up
I have developed a Kafka sink connector (using confluent-oss-3.2.0-2.11 and the Connect framework) for my data store (Amppol ADS), which stores data from Kafka topics into the corresponding tables in my store.
Everything works as expected as long as the Kafka servers and the ADS servers are up and running.
I need help/suggestions for a specific use case where events are being ingested into Kafka topics while the underlying sink component (ADS) is down. The expectation here is that whenever the sink servers come back up, the records that were ingested into the Kafka topics earlier should be inserted into the tables.
Kindly advise how to handle such a case.
Is there any support available in the Connect framework for this? Or at least some references would be a great help.
Sink connector offsets are maintained in the __consumer_offsets topic on Kafka against your connector name, and when the sink connector restarts it will pick up messages from the Kafka server starting at the last offset it had committed to the __consumer_offsets topic.
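You can see this for yourself with the standard Kafka CLI tools: Connect registers a sink connector as a regular consumer group named `connect-<connector-name>`. A quick sketch, where `ads-sink` is a hypothetical connector name and the broker address is assumed:

```shell
# Describe the consumer group that Connect manages for a sink connector.
# The group is named "connect-<connector-name>"; "ads-sink" is a
# hypothetical connector name used for illustration.
bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group connect-ads-sink
```

The CURRENT-OFFSET column shows the committed position per topic partition, which is exactly where the sink connector will resume consuming after a restart.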
So you don't have to worry about managing offsets at all; it is all done by the workers in the Connect framework. In your scenario, you simply restart your sink connector. As long as the messages have been pushed to Kafka by your source connector and are still available in Kafka, the sink connector can be started/restarted at any time.
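Operationally, the restart can be done through the Connect worker's REST API once ADS is back up. A minimal sketch, assuming a worker listening on the default port 8083 and the same hypothetical connector name `ads-sink`:

```shell
# Check the connector's current state (RUNNING / FAILED) and its tasks.
# "ads-sink" is a hypothetical connector name; port 8083 is the Connect
# worker's default REST port.
curl -s http://localhost:8083/connectors/ads-sink/status

# Restart the connector instance itself.
curl -s -X POST http://localhost:8083/connectors/ads-sink/restart

# If an individual task went into FAILED state while ADS was down,
# restart that task by its id (task 0 shown here).
curl -s -X POST http://localhost:8083/connectors/ads-sink/tasks/0/restart
```

If tasks fail repeatedly while the sink is down, it can also be worth making the connector's `put()` retry (e.g. by throwing RetriableException) so the framework backs off instead of killing the task.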