
Kafka HDFS 2 Sink connector not able to write on HDFS

Following is my Kafka connector JSON file:

curl -s -k -X POST http://cpnode.local.lan:8083/connectors -H "Content-Type: application/json" --data '{
  "name": "jdbc-Hdfs2-Sink-Connector",
  "config": {
    "tasks.max": "1",
    "batch.size": "1000",
    "batch.max.rows": "1000",
    "hdfs.poll.interval.ms": "500",
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "hdfs.url": "hdfs://hadoopnode.local.lan:9000",
    "topics": "BookList2",
    "flush.size": "1",
    "confluent.topic.bootstrap.servers": "cpnode.local.lan:9092",
    "confluent.topic.replication.factor": "1",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schemas.enable": "true",
    "value.converter.schema.registry.url": "http://cpnode.local.lan:8081",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schemas.enable": "true",
    "key.converter.schema.registry.url": "http://cpnode.local.lan:8081"
  }
}' | jq '.'

When I try to use this connector, I get the following error:

{
  "name": "jdbc-Hdfs2-Sink-Connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "192.168.1.153:8083"
  },
  "tasks": [
    {
      "id": 0,
      "state": "FAILED",
      "worker_id": "192.168.1.153:8083",
      "trace": "org.apache.kafka.connect.errors.ConnectException: org.apache.hadoop.security.AccessControlException: Permission denied: user=cp-user, access=WRITE, inode=\"/\":hadoop:supergroup:drwxr-xr-x

I have tried export HADOOP_USER_NAME=hdfs, and also this Hadoop configuration in hdfs-site.xml:

<property>
   <name>dfs.permissions</name>
   <value>false</value>
</property>

But I want a solution that does not compromise security.

cp-user is the name of my Confluent Platform user... Confluent and HDFS are on different VMs.

Thanks in advance....

Your user: user=cp-user

is trying to access=WRITE

the location inode="/"

which has user/group ownership of hadoop:supergroup:drwxr-xr-x
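
The ownership reported in the trace can be verified directly against HDFS; for example (the output line is only illustrative):

# Show the HDFS root directory entry itself (-d), with its owner, group and mode
hdfs dfs -ls -d /
# drwxr-xr-x   - hadoop supergroup          0 <date> /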


Possible solutions (non-overlapping):

  1. Change cp-user to hadoop (I assume you are using the Docker container? If so, refer to the user directive of Docker. Otherwise, export HADOOP_USER_NAME=hadoop; see the sketch after this list)
  2. Create a cp-user Unix account and add it to the NameNodes and all DataNodes of the Hadoop cluster (also sketched below)
  3. Use Kerberos
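
A rough sketch of the first two options, assuming the Connect worker inherits its environment from the shell that starts it and that you have root access on the Hadoop nodes (both are assumptions, not details from the question):

# Option 1: make the Connect worker identify itself to HDFS as the superuser (simple auth)
export HADOOP_USER_NAME=hadoop
# ...then (re)start the Connect worker from this shell so it inherits the variable

# Option 2: create a matching cp-user Unix account on the NameNode and every DataNode
sudo useradd cp-user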
