Kafka JDBC sink connector - USE statement is not supported to switch between databases
I am using the Kafka JDBC sink connector to sink data to Azure SQL Server. I tested the connector with one database and it worked fine, but when I added more databases I started seeing the following error:
USE statement is not supported to switch between databases. Use a new connection to connect to a different database.
Config:
tasks.max: 1
topics: topic_name
connection.url: jdbc:sqlserver://server:port;database=dbname;user=dbuser
connection.user: dbuser
connection.password: dbpass
transforms: unwrap
transforms.unwrap.type: io.debezium.transforms.ExtractNewRecordState
transforms.unwrap.drop.tombstones: false
auto.create: true
value.converter: org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable: true
insert.mode: upsert
delete.enabled: true
pk.mode: record_key
Stack trace:
2020-12-10 11:56:36,990 ERROR WorkerSinkTask{id=NAME-sqlserver-jdbc-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: java.sql.SQLException: com.microsoft.sqlserver.jdbc.SQLServerException: USE statement is not supported to switch between databases. Use a new connection to connect to a different database.
(org.apache.kafka.connect.runtime.WorkerSinkTask) [task-thread-NAME-sqlserver-jdbc-sink-0]
org.apache.kafka.connect.errors.ConnectException: java.sql.SQLException: com.microsoft.sqlserver.jdbc.SQLServerException: USE statement is not supported to switch between databases. Use a new connection to connect to a different database.
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:560)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:323)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.sql.SQLException: com.microsoft.sqlserver.jdbc.SQLServerException: USE statement is not supported to switch between databases. Use a new connection to connect to a different database.
I have identified the issue. At first I thought it was caused by having multiple databases inside the database server, but it turned out that the topic name contains prefix.dbo.table_name instead of just table_name. Hence, the connector was interpreting prefix.dbo as another database.
The solution is to add a RegexRouter transform (aliased dropPrefix below) that strips the prefix from the topic name.
For example, to save data from topics hello.dbo.table1 and hello.dbo.table2 into tables table1 and table2 in the database, use the following config:
tasks.max: 1
topics: hello.dbo.table1, hello.dbo.table2
connection.url: jdbc:sqlserver://server:port;database=dbname;user=dbuser
connection.user: dbuser
connection.password: dbpass
transforms: dropPrefix,unwrap
transforms.dropPrefix.type: org.apache.kafka.connect.transforms.RegexRouter
transforms.dropPrefix.regex: hello\.dbo\.(.*)
transforms.dropPrefix.replacement: $1
transforms.unwrap.type: io.debezium.transforms.ExtractNewRecordState
transforms.unwrap.drop.tombstones: false
auto.create: true
value.converter: org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable: true
insert.mode: upsert
delete.enabled: true
pk.mode: record_key
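The renaming that dropPrefix performs can be sketched in plain Java. This is a simplified illustration of RegexRouter's behavior with the regex and replacement above, not the SMT's actual source: if the topic fully matches the regex, the replacement (with capture-group references expanded) becomes the new topic; otherwise the topic is left unchanged.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DropPrefixSketch {
    // Sketch of RegexRouter with regex "hello\.dbo\.(.*)" and replacement "$1":
    // a full match replaces the topic with capture group 1 (the bare table name);
    // topics that do not match pass through untouched.
    static String route(String topic) {
        Pattern p = Pattern.compile("hello\\.dbo\\.(.*)");
        Matcher m = p.matcher(topic);
        return m.matches() ? m.replaceFirst("$1") : topic;
    }

    public static void main(String[] args) {
        System.out.println(route("hello.dbo.table1")); // table1
        System.out.println(route("hello.dbo.table2")); // table2
        System.out.println(route("other.topic"));      // other.topic (unchanged)
    }
}
```

Note that the regex must match the whole topic name; a topic from a different prefix is routed through unchanged, which is why one connector can only drop one prefix pattern per dropPrefix transform.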
If it doesn't work as a single connector, then you'll need to create one connector per database.
While the above did resolve the error around the USE SQL statement, it just led to another error: "ConnectException: mock_bv_server.dbo.Mem_User.Value (STRUCT) type doesn't have a mapping to the SQL database column type".
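That STRUCT error usually means the record value still contains nested structs, which the JDBC sink cannot map to flat SQL columns. One common approach (a sketch assuming the nesting is in the value, not a verified fix for this particular schema) is to chain Kafka Connect's built-in Flatten SMT after unwrap, e.g.:

transforms: dropPrefix,unwrap,flatten
transforms.flatten.type: org.apache.kafka.connect.transforms.Flatten$Value
transforms.flatten.delimiter: _

This collapses nested fields into top-level columns joined by the delimiter (e.g. address_city), which the JDBC sink can then map to SQL column types.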