Kafka Connect with HDFS Sink connector error

I'm using Kafka 1.1 with Kafka Connect and I'm facing an error that I don't understand. I'm not able to connect to HDFS if I set something like this:

hdfs.url": "/user/myuser/mydirectory" 

or like this (note that there is no port):

hdfs.url": "hdfs://hostname/user/myuser/mydirectory" 

I get this error:

state: "FAILED",
trace: "java.lang.NullPointerException 
   at io.confluent.connect.hdfs.HdfsSinkTask.close(HdfsSinkTask.java:135) 
   at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:377) 
   at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:576) 
   at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:177) 
   at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170) 
   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214) 
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:748)

Is this normal? Did I miss something in the documentation? Could you help me, please? I can't find an answer or a way to address my namenodes interchangeably.

The solution is that "hadoop.conf.dir" was missing from my configuration.

I had to add it because I'm working with HDFS in high availability (HA) mode.
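
For reference, a minimal sketch of what an HA connector configuration could look like; the nameservice name "mycluster", the topic name, the paths, and the flush.size value are placeholders, not taken from the original post:

# hdfs-sink.properties (illustrative values only)
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=mytopic
# Point hdfs.url at the HA nameservice instead of a single namenode host:port
# ("mycluster" is a placeholder for your nameservice name)
hdfs.url=hdfs://mycluster/user/myuser/mydirectory
# Directory containing core-site.xml and hdfs-site.xml, so the nameservice can be resolved
hadoop.conf.dir=/etc/hadoop/conf
flush.size=3

With "hadoop.conf.dir" pointing at the directory that holds core-site.xml and hdfs-site.xml, the connector can resolve the HA nameservice in "hdfs.url" instead of needing a single namenode host and port.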
