Kafka dynamic authentication and authorization (SASL/SCRAM + ACL)
Create three test users
bin/kafka-configs.sh --zookeeper 192.168.x.x:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin],SCRAM-SHA-512=[password=admin]' --entity-type users --entity-name admin
Note: the admin user created here is used for inter-broker communication.
Test user writer
bin/kafka-configs.sh --zookeeper 192.168.x.x:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=writer],SCRAM-SHA-512=[password=writer]' --entity-type users --entity-name writer
Test user reader
bin/kafka-configs.sh --zookeeper 192.168.x.x:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=reader],SCRAM-SHA-512=[password=reader]' --entity-type users --entity-name reader
View the created user information
bin/kafka-configs.sh --zookeeper 192.168.2.6:2181 --describe --entity-type users (add --entity-name writer to describe a single user)
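Note that kafka-configs.sh does not store the plaintext password in ZooKeeper; it stores derived SCRAM credentials (salt, StoredKey, ServerKey, iteration count) as described in RFC 5802. A minimal sketch of that derivation using only the Python standard library (the salt and user here are illustrative, not what Kafka actually generated):

```python
import hashlib
import hmac
import os

def scram_credentials(password: str, salt: bytes, iterations: int, algo: str = "sha512"):
    """Derive SCRAM StoredKey/ServerKey per RFC 5802."""
    # SaltedPassword := Hi(password, salt, iterations), i.e. PBKDF2-HMAC
    salted = hashlib.pbkdf2_hmac(algo, password.encode("utf-8"), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", getattr(hashlib, algo)).digest()
    stored_key = getattr(hashlib, algo)(client_key).digest()   # H(ClientKey)
    server_key = hmac.new(salted, b"Server Key", getattr(hashlib, algo)).digest()
    return stored_key, server_key

salt = os.urandom(16)            # Kafka generates a random salt per user
stored, server = scram_credentials("writer", salt, 8192)
print(len(stored), len(server))  # SHA-512 digests are 64 bytes each
```

Only StoredKey and ServerKey are persisted, so an attacker reading ZooKeeper cannot recover the password directly.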
Create the JAAS configuration file kafka-broker-jaas.conf
Save it under /opt/kafka/config (on every host)
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin";
};
Configure server.properties on the broker side
# Enable the ACL authorizer
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Make admin a super user in this example
super.users=User:admin
# Enable the SCRAM mechanism with the SCRAM-SHA-512 algorithm
sasl.enabled.mechanisms=SCRAM-SHA-512
# Use SCRAM-SHA-512 for inter-broker communication as well
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
# Inter-broker communication uses SASL_PLAINTEXT
security.inter.broker.protocol=SASL_PLAINTEXT
# Configure listeners to use SASL_PLAINTEXT (set to the current host)
listeners=SASL_PLAINTEXT://n6.aa-data.cn:9092
# Configure advertised.listeners (set to the current host)
advertised.listeners=SASL_PLAINTEXT://n6.aa-data.cn:9092
Set the environment variable to load the JAAS file
export KAFKA_OPTS='-Djava.security.auth.login.config=/opt/kafka/config/kafka-broker-jaas.conf'
Producing
Create producer.conf
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="writer" password="writer";
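The sasl.jaas.config value is a single line carrying the same fields as the broker's JAAS file. When provisioning many users it can be generated instead of hand-edited; a small hypothetical helper, stdlib only:

```python
def jaas_config(username: str, password: str) -> str:
    """Build a client-side sasl.jaas.config value for SCRAM."""
    return (
        "org.apache.kafka.common.security.scram.ScramLoginModule required "
        f'username="{username}" password="{password}";'
    )

print(jaas_config("writer", "writer"))
```

The trailing semicolon is required; forgetting it is a common cause of login-module parse errors.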
Grant writer write permission on the topic
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=192.168.2.6:2181 --add --allow-principal User:writer --operation Write --topic testcon
Produce messages
bin/kafka-console-producer.sh --broker-list n6.aa-data.cn:9092 --topic testcon --producer.config /opt/kafka/config/producer.conf
Consuming
Create consumer.conf
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="reader" password="reader";
Grant reader read permission on the topic
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=192.168.2.6:2181 --add --allow-principal User:reader --operation Read --topic testcon
Grant reader access to the consumer group (a consumer must also be authorized to read the group it joins; here the group is test)
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=n6.aa-data.cn:2181 --add --allow-principal User:reader --operation Read --group test
Consume messages from testcon
bin/kafka-console-consumer.sh --bootstrap-server 192.168.2.6:9092 --topic testcon --from-beginning --consumer.config /opt/kafka/config/consumer.conf
Dynamically add new users
Test user writer1
bin/kafka-configs.sh --zookeeper 192.168.2.6:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=writer1],SCRAM-SHA-512=[password=writer1]' --entity-type users --entity-name writer1
Test user reader1
bin/kafka-configs.sh --zookeeper 192.168.2.6:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=reader1],SCRAM-SHA-512=[password=reader1]' --entity-type users --entity-name reader1
Grant permissions
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=192.168.2.6:2181 --add --allow-principal User:writer1 --operation Write --topic testcon
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=192.168.2.6:2181 --add --allow-principal User:reader1 --operation Read --topic testcon
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=n6.aa-data.cn:2181 --add --allow-principal User:reader1 --operation Read --group test1
Delete the original users
bin/kafka-configs.sh --zookeeper 192.168.2.6:2181 --alter --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name writer
bin/kafka-configs.sh --zookeeper 192.168.2.6:2181 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name writer
Note: after a user is deleted, producer.conf and consumer.conf must be updated to use a still-valid account.
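Updating those client files after a credential rotation can be scripted; a small sketch that rewrites the sasl.jaas.config line in place (the file path and new credentials below are illustrative):

```python
from pathlib import Path

def update_jaas_line(conf_path: str, username: str, password: str) -> None:
    """Replace the sasl.jaas.config entry in a client properties file."""
    new_line = (
        "sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule "
        f'required username="{username}" password="{password}";'
    )
    path = Path(conf_path)
    lines = path.read_text().splitlines()
    # Swap only the sasl.jaas.config line; leave every other property untouched
    lines = [new_line if l.startswith("sasl.jaas.config=") else l for l in lines]
    path.write_text("\n".join(lines) + "\n")

# Example: switch producer.conf over to the new writer1 account
# update_jaas_line("/opt/kafka/config/producer.conf", "writer1", "writer1")
```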
Summary
With SCRAM credentials stored dynamically in ZooKeeper and SimpleAclAuthorizer enforcing per-topic and per-group ACLs, users and their permissions can be created, granted, and revoked at runtime without restarting the brokers.