
Using a connector with Helm-installed Kafka/Confluent

I have installed Kafka on a local Minikube using the Helm charts at https://github.com/confluentinc/cp-helm-charts, following the instructions at https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html, like so:

helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360

The kafka_config.yaml is almost identical to the default values file; the one exception is that I scaled it down to 1 server/broker instead of 3 (just because I'm trying to conserve resources on my local Minikube; hopefully that's not relevant to my problem).
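For reference, the scaled-down override might look something like this (a sketch only; the `cp-zookeeper.servers` and `cp-kafka.brokers` keys follow the chart's default values.yaml layout, and the replication-factor overrides are an assumption you'd need when running a single broker):

```yaml
# kafka_config.yaml -- illustrative override for cp-helm-charts
# (key names assume the chart's default values.yaml layout)
cp-zookeeper:
  servers: 1          # scale ZooKeeper down from 3 to 1
cp-kafka:
  brokers: 1          # scale Kafka down from 3 to 1
  configurationOverrides:
    # replication factors must not exceed the broker count
    "offsets.topic.replication.factor": "1"
    "default.replication.factor": "1"
```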

Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace :

(screenshot of the kubectl get pods output)

I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC , for instance). In the instructions, it says:

Install your connector

Use the Confluent Hub client to install this connector with:

confluent-hub install debezium/debezium-connector-mysql:0.9.2

Sounds good, except 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.

Questions:

  1. Does confluent-hub not come installed via those Helm charts?
  2. Do I have to install confluent-hub myself?
  3. If so, which pod do I have to install it on?

Ideally this should be configurable as part of the Helm chart, but unfortunately it is not as of now. One way to work around this is to build a new image from Confluent's Kafka Connect Docker image: download the connector manually, extract the contents into a folder, and copy the contents to a path in the container. Something like below.

Contents of the Dockerfile:

FROM confluentinc/cp-kafka-connect:5.2.1
COPY <connector-directory> /usr/share/java

/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and provide the new location ( plugin.path ) during your Helm installation.

Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the Helm installation.
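The build-and-override steps above could look roughly like this (a sketch, not a definitive recipe: `myregistry` and the image name are placeholders, and the `cp-kafka-connect.image`/`cp-kafka-connect.tag` value names are an assumption based on the subchart's values.yaml, so verify them against your chart version):

```
# Build the image containing the connector and push it somewhere the cluster can pull from
docker build -t myregistry/cp-kafka-connect-debezium:5.2.1 .
docker push myregistry/cp-kafka-connect-debezium:5.2.1

# Point the cp-kafka-connect subchart at the custom image during installation
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --name kafka-home-delivery --namespace cust360 \
  --set cp-kafka-connect.image=myregistry/cp-kafka-connect-debezium \
  --set cp-kafka-connect.tag=5.2.1
```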

The chart's values.yaml file is where you can find the image and plugin.path values.

Just an add-on to Jegan's answer above: https://stackoverflow.com/a/56049585/6002912

You can use the Dockerfile below. Recommended.

FROM confluentinc/cp-server-connect-operator:5.4.0.0

RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.0.0

Or you can use Docker's multi-stage build style ( COPY --from ) instead.

FROM confluentinc/cp-server-connect-operator:5.4.0.0

COPY --from=debezium/connect:1.0 \
    /kafka/connect/debezium-connector-postgres/ \
    /usr/share/confluent-hub-components/debezium-connector-postgres/

This will help you save time getting the right jar files for plugins like debezium-connector-postgres.

From the Confluent documentation: https://docs.confluent.io/current/connect/managing/extending.html#create-a-docker-image-containing-c-hub-connectors

The Kafka Connect pod should already have confluent-hub installed. It is that pod you should run the commands on.

The cp-kafka-connect pod has 2 containers, one of which is the cp-kafka-connect-server container. That container has confluent-hub installed, so you can log into it and run the connector commands there. To log into that container, run the following command:

kubectl exec -it {pod-name} -c cp-kafka-connect-server -- /bin/bash
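Once the plugin is installed and Connect has restarted, the connector itself is registered through the Kafka Connect REST API rather than a CLI. A hedged sketch from inside the container (the connector name, database host, credentials, and the default REST port 8083 are all assumptions to adapt; the config keys are the standard Debezium MySQL connector properties):

```
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "mysql-cdc",
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.hostname": "mysql.cust360.svc.cluster.local",
      "database.port": "3306",
      "database.user": "debezium",
      "database.password": "changeme",
      "database.server.id": "1",
      "database.server.name": "mysql-server",
      "database.history.kafka.bootstrap.servers": "kafka-home-delivery-cp-kafka:9092",
      "database.history.kafka.topic": "schema-changes.mysql"
    }
  }'
```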

As of the latest version of the chart, this can be achieved using customEnv.CUSTOM_SCRIPT_PATH .

See the chart's README.md .

The script can be passed as a Secret and mounted as a volume.
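A very rough sketch of that flow, heavily hedged: the Secret creation below is standard kubectl, but the exact way the chart mounts the Secret and the path you point CUSTOM_SCRIPT_PATH at depend on the chart's values, so check its README; the script name and mount path here are hypothetical.

```
# install-connectors.sh (hypothetical): runs at container startup
#   confluent-hub install --no-prompt debezium/debezium-connector-mysql:0.9.2

# Store the script in a Secret so the chart can mount it as a volume
kubectl create secret generic connect-custom-script \
  --namespace cust360 \
  --from-file=install-connectors.sh

# During installation, point CUSTOM_SCRIPT_PATH at the mounted script
# (mount path /etc/scripts is an assumption; see the chart's README for
#  how the Secret volume is declared in values)
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --name kafka-home-delivery --namespace cust360 \
  --set cp-kafka-connect.customEnv.CUSTOM_SCRIPT_PATH=/etc/scripts/install-connectors.sh
```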
