
kafka SASL/SCRAM Failed authentication

I tried to add security to my Kafka cluster, following the documentation.

I added the user with this command:

kafka-configs.sh --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin

I modified server.properties:

broker.id=1
listeners=SASL_PLAINTEXT://kafka1:9092
advertised.listeners=SASL_PLAINTEXT://kafka1:9092
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
default.replication.factor=3
min.insync.replicas=2
log.dirs=/var/lib/kafka
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

I created the JAAS file:

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret"
};

I created the file kafka_opts.sh in /etc/profile.d:

export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf

But when I start Kafka it throws the following error:

[2020-05-04 10:54:08,782] INFO [Controller id=1, targetBrokerId=1] Failed authentication with kafka1/kafka1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)

Note that instead of kafka1, kafka2, kafka3, zookeeper1, zookeeper2, and zookeeper3 I use the respective IP of each server. Can someone help me with my issue?

My main problem was this configuration:

zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka

This setting in server.properties was needed to keep the Kafka metadata organized under a chroot path in ZooKeeper, but it affects the way the kafka-configs.sh command must be executed, so I will explain the steps I needed to follow.

  1. First, modify ZooKeeper.

I downloaded ZooKeeper from the official site: https://zookeeper.apache.org/releases.html

I modified the zoo.cfg file and added the security configuration:

tickTime=2000
dataDir=/var/lib/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl

I created the JAAS file for ZooKeeper:

Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_admin="admin_secret";
};

I created the file java.env in the conf/ directory and added the following:

SERVER_JVMFLAGS="-Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf"

These files tell ZooKeeper to use the JAAS file so that Kafka can authenticate to ZooKeeper. To validate that ZooKeeper is picking up the file, you only need to run:

zkServer.sh print-cmd

It will respond with something like:

/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
"java"  -Dzookeeper.log.dir="/opt/apache-zookeeper-3.6.0-bin/bin/../logs" ........-Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf....... "/opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg" > "/opt/apache-zookeeper-3.6.0-bin/bin/../logs/zookeeper.out" 2>&1 < /dev/null
  2. Modify Kafka.

I downloaded Kafka from the official site: https://www.apache.org/dyn/closer.cgi?path=/kafka/2.5.0/kafka_2.12-2.5.0.tgz

I modified/added the following configuration in the server.properties file:

listeners=SASL_PLAINTEXT://kafka1:9092
advertised.listeners=SASL_PLAINTEXT://kafka1:9092
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:admin
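Because allow.everyone.if.no.acl.found=false, every principal except the super user admin is denied until you grant ACLs explicitly. As a sketch of what that looks like (the user alice, the topic test-topic, and the group test-group are placeholder names, not part of my setup; the command assumes the same /kafka chroot):

```shell
# Grant a hypothetical user "alice" permission to produce to a topic...
kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper1:2181/kafka \
  --add --allow-principal User:alice --producer --topic test-topic

# ...and to consume from it as part of a consumer group.
kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper1:2181/kafka \
  --add --allow-principal User:alice --consumer --topic test-topic --group test-group
```

The --producer and --consumer convenience flags expand to the individual Write/Describe and Read/Describe operations the client needs.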

I created the JAAS file for Kafka:

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin_secret";
};
Client {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   username="admin"
   password="admin_secret";
};

One important thing to understand: the Client section must use the same credentials as the JAAS file in ZooKeeper, while the KafkaServer section is for inter-broker communication.

I also need to tell Kafka to use the JAAS file, which can be done by setting the KAFKA_OPTS variable:

export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf
  3. Create the admin user for the Kafka brokers.

Run the following command:

kafka-configs.sh --zookeeper zookeeper:2181/kafka --alter --add-config 'SCRAM-SHA-256=[password=admin_secret]' --entity-type users --entity-name admin
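To confirm that the SCRAM credential was actually stored under the chroot, you can describe the user entity with the same tool (again, substitute your real ZooKeeper address):

```shell
# List the SCRAM credentials registered for the admin user
kafka-configs.sh --zookeeper zookeeper:2181/kafka --describe \
  --entity-type users --entity-name admin
```

If the credential exists, the output lists SCRAM-SHA-256 for the user admin; an empty result means the user was created under a different path, which was exactly my problem.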

As I mentioned before, my error was that I wasn't adding the /kafka chroot to the ZooKeeper address (note that every command that connects to ZooKeeper needs the /kafka suffix at the end of the address). With that fixed, if you start ZooKeeper and Kafka, everything works great.
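As a final end-to-end check, you can try producing as the admin user over SASL/SCRAM. This is a sketch: the file name client.properties and the topic name test are assumptions, not part of the original setup:

```shell
# Client-side SASL/SCRAM settings (assumed file name: client.properties)
cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" \
  password="admin_secret";
EOF

# Try producing to a test topic; with valid credentials the producer
# starts a prompt instead of logging an authentication failure
kafka-console-producer.sh --broker-list kafka1:9092 --topic test \
  --producer.config client.properties
```

If the credentials or the chroot are still wrong, this client fails with the same "invalid credentials with SASL mechanism SCRAM-SHA-256" error as the broker did.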

