How to write to multiple distinct Elasticsearch clusters using the Kafka Elasticsearch Sink Connector
Is it possible to use a single Kafka instance with the Elasticsearch Sink Connector to write to separate Elasticsearch clusters with the same index? The source data may be a backend database or an application. An example use case is that one cluster may be used for real-time search and the other for analytics.
If this is possible, how do I configure the sink connector? If not, I can think of a couple of options:

Are there any others?
Yes, you can do this. You can use a single Kafka cluster and a single Kafka Connect worker. One connector can write to one Elasticsearch instance, so if you have multiple destination Elasticsearch clusters you need multiple connectors configured.
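As a sketch, a connector configuration for one of the clusters might look like the following. The connector class and property names are the standard ones for the Confluent Elasticsearch Sink Connector; the topic name, connector name, and cluster URL are illustrative assumptions.

```json
{
  "name": "elasticsearch-sink-search",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "orders",
    "connection.url": "http://es-search.example.com:9200",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}
```

The second connector would be identical apart from its `name` (e.g. `elasticsearch-sink-analytics`) and its `connection.url`, which would point at the analytics cluster. Both connectors read the same topic, so both clusters receive the same data under the same index (by default the index is derived from the topic name).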
The usual way to run Kafka Connect is in "distributed" mode (even on a single instance), and you then submit one or more connector configurations via the REST API.
You don't need a Java client to use Kafka Connect; it is configuration only. The configuration, per connector, specifies where to get the data from (which Kafka topic or topics) and where to write it (which Elasticsearch instance).
To learn more about Kafka Connect, see this talk, this short video, and this tutorial on Kafka Connect and Elasticsearch.