
How to write to multiple distinct Elasticsearch clusters using the Kafka Elasticsearch Sink Connector

Is it possible to use a single Kafka instance with the Elasticsearch Sink Connector to write to separate Elasticsearch clusters with the same index? Documentation. The source data may be a backend database or an application. An example use-case is that one cluster may be used for real-time search and the other for analytics.

If this is possible, how do I configure the sink connector? If not, I can think of a couple of options:

  1. Use 2 Kafka instances, each pointing to a different Elasticsearch cluster. Either write to both, or write to one and copy from it to the other.
  2. Use a single Kafka instance and write a stream processor which will write to both clusters.

Are there any others?

Yes, you can do this. You can use a single Kafka cluster and a single Kafka Connect worker. One connector can write to one Elasticsearch instance, so if you have multiple destination Elasticsearch clusters you need multiple connectors configured.
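For example, here is a minimal sketch of two such connector configurations, both reading the same topic but each writing to a different cluster. The connector names, the `orders` topic, and the `es-search`/`es-analytics` hostnames are placeholders for this example; the connector class and the `topics`/`connection.url` settings are the standard Elasticsearch Sink Connector options.

```bash
# Two sink connector configs reading the same topic but writing to
# different Elasticsearch clusters. Names and hostnames are placeholders.
cat > es-sink-search.json <<'EOF'
{
  "name": "es-sink-search",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "orders",
    "connection.url": "http://es-search:9200",
    "type.name": "_doc",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}
EOF

cat > es-sink-analytics.json <<'EOF'
{
  "name": "es-sink-analytics",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "orders",
    "connection.url": "http://es-analytics:9200",
    "type.name": "_doc",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}
EOF
```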

The usual way to run Kafka Connect is in "distributed" mode (even on a single instance), and then you submit one or more connector configurations via the REST API.
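As a sketch, assuming a Connect worker listening on its default REST port 8083, the two configurations above could be submitted and checked like this:

```bash
# Submit both connector configs to the Connect worker's REST API.
curl -s -X POST -H "Content-Type: application/json" \
     --data @es-sink-search.json http://localhost:8083/connectors
curl -s -X POST -H "Content-Type: application/json" \
     --data @es-sink-analytics.json http://localhost:8083/connectors

# Check that both connectors and their tasks are RUNNING.
curl -s http://localhost:8083/connectors/es-sink-search/status
curl -s http://localhost:8083/connectors/es-sink-analytics/status
```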

You don't need a Java client to use Kafka Connect - it's configuration only. The configuration, per connector, says where to get the data from (which Kafka topic(s)) and where to write it (which Elasticsearch instance).
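For instance, you can list and inspect everything with plain HTTP calls, no client code involved (again assuming the default port 8083; the connector name comes from the sketch above):

```bash
# List the connectors registered on this worker.
curl -s http://localhost:8083/connectors

# Inspect one connector's configuration.
curl -s http://localhost:8083/connectors/es-sink-search/config
```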

To learn more about Kafka Connect see this talk, this short video, and this specific tutorial on Kafka Connect and Elasticsearch.
