
Kafka Connect Sink to Cassandra :: java.lang.VerifyError: Bad return type

I'm trying to set up a Kafka Connect sink to collect data from a topic into a Cassandra table using the DataStax connector: https://downloads.datastax.com/#akc

I'm running a standalone worker directly on the broker host, on Kafka 0.10.2.2-1, with the following configuration:

    name=dse-sink
    connector.class=com.datastax.kafkaconnector.DseSinkConnector
    tasks.max=1
    datastax-java-driver.advanced.protocol.version = V4
    key.converter=org.apache.kafka.connect.storage.StringConverter
    value.converter=org.apache.kafka.connect.storage.StringConverter
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false
    internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    internal.value.converter=org.apache.kafka.connect.json.JsonConverter
    internal.key.converter.schemas.enable=false
    internal.value.converter.schemas.enable=false
    plugin.path=/usr/share/java/kafka-connect-dse/kafka-connect-dse-1.2.1.jar
    topics=connect-test
    contactPoints=172.16.0.48
    loadBalancing.localDc=datacenter1
    port=9042
    ignoreErrors=true
    topic.connect-test.cdrs.test.mapping=kafkakey=key, value=value
    topic.connect-test.cdrs.test.consistencyLevel=LOCAL_QUORUM
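
For context, the worker is started with the stock standalone script, along these lines (the property file names here are just placeholders for the worker and connector settings above):

    # hypothetical file names; adjust paths to your installation
    bin/connect-standalone.sh config/connect-standalone.properties dse-sink.properties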

But I get the following error:

    [2019-12-23 16:58:43,165] ERROR Task dse-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
    java.lang.VerifyError: Bad return type
    Exception Details:
      Location:
        com/fasterxml/jackson/databind/cfg/MapperBuilder.streamFactory()Lcom/fasterxml/jackson/core/TokenStreamFactory; @7: areturn
      Reason:
        Type 'com/fasterxml/jackson/core/JsonFactory' (current frame, stack[0]) is not assignable to 'com/fasterxml/jackson/core/TokenStreamFactory' (from method signature)
      Current Frame:
        bci: @7
        flags: { }
        locals: { 'com/fasterxml/jackson/databind/cfg/MapperBuilder' }
        stack: { 'com/fasterxml/jackson/core/JsonFactory' }
      Bytecode:
        0x0000000: 2ab4 0002 b600 08b0
        at com.fasterxml.jackson.databind.json.JsonMapper.builder(JsonMapper.java:114)
        at com.datastax.dsbulk.commons.codecs.json.JsonCodecUtils.getObjectMapper(JsonCodecUtils.java:36)
        at com.datastax.kafkaconnector.codecs.CodecSettings.init(CodecSettings.java:131)
        at com.datastax.kafkaconnector.state.LifeCycleManager.lambda$buildInstanceState$9(LifeCycleManager.java:423)
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
        at java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1625)
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
        at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
        at com.datastax.kafkaconnector.state.LifeCycleManager.buildInstanceState(LifeCycleManager.java:457)
        at com.datastax.kafkaconnector.state.LifeCycleManager.lambda$startTask$0(LifeCycleManager.java:106)
        at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
        at com.datastax.kafkaconnector.state.LifeCycleManager.startTask(LifeCycleManager.java:101)
        at com.datastax.kafkaconnector.DseSinkTask.start(DseSinkTask.java:74)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:244)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:145)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

There is no additional error on the Cassandra or Kafka side. I can see an active connection on the Cassandra node, but nothing arrives in the keyspace.

Any idea why?

In my opinion, this is a problem caused by using the JSON internal converters with BigDecimal data (see the related SO question). As described in the blog post linked there, internal.key.converter and internal.value.converter have been deprecated since Kafka 2.0 and shouldn't be set explicitly. Can you comment out all internal.* properties and retry?
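
Concretely, that means removing (or commenting out) just these lines from the configuration you posted, leaving everything else unchanged:

    # deprecated since Kafka 2.0 -- let the worker use its defaults instead
    # internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    # internal.value.converter=org.apache.kafka.connect.json.JsonConverter
    # internal.key.converter.schemas.enable=false
    # internal.value.converter.schemas.enable=false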

P.S. Also see how JSON + Decimal handling has changed in Kafka 2.4.
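
For reference, a sketch of the Kafka 2.4+ option that change introduced (KIP-481): the JsonConverter can serialize decimals as plain numbers instead of base64-encoded bytes. This only applies once the Connect worker is on 2.4 or later, so it is not available on the 0.10.2 setup above:

    # available from Kafka 2.4 (KIP-481); the default remains BASE64
    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.decimal.format=NUMERIC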
