How to create a JDBC sink connector for multiple topics using the topics.regex option

I created a JDBC source connector with

catalog.pattern = test_01

Source connector configuration:

{
  "name": "jdbcsource",
  "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
  "connection.url": "jdbc:mysql://192.168.1.8/test_01?nullCatalogMeansCurrent=true",
  "connection.user": "root",
  "connection.password": "********",
  "catalog.pattern": "test_01",
  "mode": "timestamp",
  "timestamp.column.name": "UpdateDate",
  "topic.prefix": "jdbcsource-"
}

The test_01 database contains example01 (parent table) and examplerole (child table), with a foreign-key reference on user_id (see the schemas below).

mysql> describe example01;
+------------------+--------------+------+-----+-------------------+-----------------------------+
| Field            | Type         | Null | Key | Default           | Extra                       |
+------------------+--------------+------+-----+-------------------+-----------------------------+
| user_id          | int(11)      | NO   | PRI | NULL              | auto_increment              |
| user_name        | varchar(255) | NO   |     | NULL              |                             |
| user_description | text         | YES  |     | NULL              |                             |
| CreationDate     | timestamp    | NO   |     | CURRENT_TIMESTAMP |                             |
| UpdateDate       | timestamp    | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
+------------------+--------------+------+-----+-------------------+-----------------------------+
5 rows in set (0.00 sec)

mysql> describe examplerole;
+--------------+---------------+------+-----+-------------------+-----------------------------+
| Field        | Type          | Null | Key | Default           | Extra                       |
+--------------+---------------+------+-----+-------------------+-----------------------------+
| prd_id       | int(11)       | NO   | PRI | NULL              | auto_increment              |
| prd_name     | varchar(355)  | NO   |     | NULL              |                             |
| prd_price    | decimal(10,0) | YES  |     | NULL              |                             |
| user_id      | int(11)       | NO   | MUL | NULL              |                             |
| CreationDate | timestamp     | NO   |     | CURRENT_TIMESTAMP |                             |
| UpdateDate   | timestamp     | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
+--------------+---------------+------+-----+-------------------+-----------------------------+
6 rows in set (0.00 sec)
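With topic.prefix set to jdbcsource-, the source connector publishes each table to its own topic named prefix + table name. A small sketch of the expected topic names (assumed default table-naming behavior of the Confluent JDBC source connector; the table list matches the TableMonitorThread log output shown further down):

```python
# Sketch: how the JDBC source connector derives topic names (prefix + table name).
def topic_for(prefix: str, table: str) -> str:
    """Topic name the source connector is expected to produce for a table."""
    return prefix + table

tables = ["example01", "examplerole"]
topics = [topic_for("jdbcsource-", t) for t in tables]
print(topics)  # ['jdbcsource-example01', 'jdbcsource-examplerole']
```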

I then tried to create the sink connector with the topics.regex option, using user_id as the key reference into the table:

curl -XPOST -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{
    "name": "MySQL-JDBC-sink-connector",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "1",
        "topics.regex": "jdbcsource*",
        "connection.url": "jdbc:mysql://192.168.1.8/test_01?nullCatalogMeansCurrent=true",
        "connection.user": "root",
        "connection.password": "********",
        "insert.mode": "insert",
        "pk.mode": "record_value",
        "pk.fields": "user_id",
        "auto.create": "true",
        "auto.evolve": "true"
    }
}' 'localhost:8083/connectors'
echo "\n"

The sink configuration above fails with the error below, even though the topics.regex option is present in the config:

[2019-06-10 04:54:56,678] ERROR Uncaught exception in REST call to /connector-plugins/io.confluent.connect.jdbc.JdbcSinkConnector/config/validate (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper)
org.apache.kafka.common.config.ConfigException: Must configure one of topics or topics.regex
    at org.apache.kafka.connect.runtime.SinkConnectorConfig.validate(SinkConnectorConfig.java:96)
    at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:269)
    at org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource.validateConfigs(ConnectorPluginsResource.java:81)
    at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:243)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:867)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:542)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:174)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.server.Server.handle(Server.java:502)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.
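For context, the validation that throws this error requires exactly one of topics / topics.regex to be set and non-blank. A loose Python sketch of the rule in SinkConnectorConfig.validate (the exact Java logic and messages may differ):

```python
def validate_sink_config(config: dict) -> None:
    # Loose sketch of Kafka Connect's sink validation: exactly one of
    # "topics" / "topics.regex" must be configured (blank counts as unset).
    has_topics = bool(str(config.get("topics", "")).strip())
    has_regex = bool(str(config.get("topics.regex", "")).strip())
    if not has_topics and not has_regex:
        raise ValueError("Must configure one of topics or topics.regex")
    if has_topics and has_regex:
        raise ValueError("topics and topics.regex are mutually exclusive")

validate_sink_config({"topics.regex": "jdbcsource-.*"})  # passes silently
```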

I also tried creating the JDBC sink connector with the topics:<t1,t2> option; see the configuration below:

{
  "name": "jdbcsink",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "topics": [
    "jdbcsource-example01",
    "jdbcsource-examplerole"
  ],
  "connection.url": "jdbc:mysql://192.168.1.8/test_01?nullCatalogMeansCurrent=true",
  "connection.user": "root",
  "connection.password": "********",
  "auto.create": "true",
  "auto.evolve": "true"
}

This fails with the error below:

[2019-06-10 04:53:19,738] WARN [Consumer clientId=consumer-6, groupId=connect-jdbcsink] Error while fetching metadata with correlation id 2 : { jdbcsource-examplerole=INVALID_TOPIC_EXCEPTION} (org.apache.kafka.clients.NetworkClient)
[2019-06-10 04:53:19,739] ERROR WorkerSinkTask{id=jdbcsink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: [ jdbcsource-examplerole]
[2019-06-10 04:53:19,739] ERROR WorkerSinkTask{id=jdbcsink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
[2019-06-10 04:53:19,740] INFO Stopping task (io.confluent.connect.jdbc.sink.JdbcSinkTask)
[2019-06-10 04:53:19,749] INFO After filtering the tables are: `test_01`.`example01`,`test_01`.`examplerole` (io.confluent.connect.jdbc.source.TableMonitorThread)
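The INVALID_TOPIC_EXCEPTION for [ jdbcsource-examplerole] (note the leading space) suggests the topics list was parsed with stray whitespace; Kafka topic names may contain only ASCII alphanumerics, '.', '_' and '-'. A quick check, mirroring (approximately) Kafka's internal Topic validity rule:

```python
import re

# Approximate Kafka topic-name rule (org.apache.kafka.common.internals.Topic):
# 1-249 characters from [a-zA-Z0-9._-], and not "." or "..".
_LEGAL = re.compile(r"^[a-zA-Z0-9._-]{1,249}$")

def is_valid_topic(name: str) -> bool:
    return bool(_LEGAL.fullmatch(name)) and name not in (".", "..")

print(is_valid_topic("jdbcsource-examplerole"))   # True
print(is_valid_topic(" jdbcsource-examplerole"))  # False: leading space
```

If topics is given as a single comma-separated string, writing it without spaces ("topics": "jdbcsource-example01,jdbcsource-examplerole") should avoid this particular failure.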

When I create a sink connector with an individual topic, it works; the issues above appear only when a single sink connector config covers multiple topics.

Please suggest the configuration for creating a JDBC sink connector over multiple topics using the topics.regex and auto.create options.

If you're specifying topics.regex, then don't specify topics. Instead of

   "topics": [],
   "topics.regex": "example*",

you just need

   "topics.regex": "example*",
