Connect with a secured HBase using a Java client

I am trying to connect to a secured HBase using Kerberos. It is an HBase deployed in an HDP3 cluster. To be precise, I am trying to reach it with a Java client running on a host outside the cluster.

Here is my code:

    public static void main(String[] args) throws IOException {
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        System.setProperty("sun.security.krb5.debug", "true");
        System.setProperty("java.security.debug", "gssloginconfig,configfile,configparser,logincontext");
        System.setProperty("java.security.auth.login.config", "hbase.conf");

        Configuration conf = HBaseConfiguration.create();

        String principal = "user@REALM";
        File keytab = new File("/home/user/user.keytab");

        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab.getAbsolutePath());

        ugi.doAs(new PrivilegedAction<Void>() {
            @Override
            public Void run() {
                try {
                    TableName tableName = TableName.valueOf("some_table");
                    final Connection conn = ConnectionFactory.createConnection(conf);
                    System.out.println(" go ");
                    Table table = conn.getTable(tableName);
                    Result r = table.get(new Get(Bytes.toBytes("some_key")));
                    System.out.println(r);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                return null;
            }
        });
    }

And here is my JAAS file configuration:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/home/user/user.keytab"
  principal="user@REALM";
};
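
For what it's worth, ZooKeeper's SASL client looks up the JAAS login context named "Client" by default, which is why the entry above carries that name. As a minimal sketch, the section name can also be selected explicitly through a standard ZooKeeper system property (the value shown is simply the default):

    // Sketch: ZooKeeper's SASL client reads the JAAS login context named "Client"
    // by default; this standard property selects a different section name if needed.
    System.setProperty("zookeeper.sasl.clientconfig", "Client");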

All ZooKeeper and other configuration values are taken from the hbase-site.xml file provided by Ambari.
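
For context, the handful of client-side values from that hbase-site.xml that matter for a secured connection are roughly the following. This is a sketch with illustrative values: the quorum hosts and the /hbase-secure znode parent match the trace below, while the principal patterns are assumptions based on typical HDP defaults:

    // Sketch of the client-side properties a Kerberized HBase client needs.
    // Values are illustrative; on HDP they come from the Ambari-managed hbase-site.xml.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "node2,node3,node4");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    conf.set("zookeeper.znode.parent", "/hbase-secure");
    conf.set("hbase.security.authentication", "kerberos");
    conf.set("hadoop.security.authentication", "kerberos");
    conf.set("hbase.master.kerberos.principal", "hbase/_HOST@REALM");       // assumed pattern
    conf.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@REALM"); // assumed pattern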

I do not get any error; the client just enters an infinite loop whose trace looks like this:

[ReadOnlyZKClient-node2:2181,node3:2181,node4:2181@0x50ad3bc1-SendThread(node4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x371f62d9b230031, packet:: clientPath:/hbase-secure/meta-region-server serverPath:/hbase-secure/meta-region-server finished:false header:: 141,4 replyHeader:: 141,365072222881,0 request:: '/hbase-secure/meta-region-server,F response:: #ffffffff000146d61737465723a313630303019fffffff6ffffff864dffffff99ffffff85151c50425546a11a56e6f64653410ffffff947d18ffffffb0ffffffa6ffffff81ffffffc5ffffff9f2e100183,s{365072220963,365072222074,1588973398227,1589014218472,5,0,0,0,52,0,365072220963}
[ReadOnlyZKClient-node2:2181,node3:2181,node4:2181@0x50ad3bc1-SendThread(node4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x371f62d9b230031, packet:: clientPath:/hbase-secure/meta-region-server serverPath:/hbase-secure/meta-region-server finished:false header:: 142,4 replyHeader:: 142,365072222881,0 request:: '/hbase-secure/meta-region-server,F response:: #ffffffff000146d61737465723a313630303019fffffff6ffffff864dffffff99ffffff85151c50425546a11a56e6f64653410ffffff947d18ffffffb0ffffffa6ffffff81ffffffc5ffffff9f2e100183,s{365072220963,365072222074,1588973398227,1589014218472,5,0,0,0,52,0,365072220963}
[ReadOnlyZKClient-node2:2181,node3:2181,node4:2181@0x50ad3bc1-SendThread(node4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x371f62d9b230031, packet:: clientPath:/hbase-secure/meta-region-server serverPath:/hbase-secure/meta-region-server finished:false header:: 143,4 replyHeader:: 143,365072222881,0 request:: '/hbase-secure/meta-region-server,F response:: #ffffffff000146d61737465723a313630303019fffffff6ffffff864dffffff99ffffff85151c50425546a11a56e6f64653410ffffff947d18ffffffb0ffffffa6ffffff81ffffffc5ffffff9f2e100183,s{365072220963,365072222074,1588973398227,1589014218472,5,0,0,0,52,0,365072220963}

EDIT

OK, I do get this error; I just had not waited long enough:

Exception in thread "main" java.net.SocketTimeoutException: callTimeout=1200000, callDuration=2350283: Failed after attempts=36, exceptions:
Mon May 11 13:53:42 CEST 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=70631: Call to slave-5.cluster/172.10.96.43:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] row 'some_table,some_key,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=slave-5.cluster/172.10.96.43:16020,16020,1588595144765, seqNum=-1
 row 'row_key' on table 'some_table' at null
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:159)
    at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
    at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
    at internal.holly.devoptools.hbase.HBaseCli.main(HBaseCli.java:77)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Mon May 11 13:53:42 CEST 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=70631: Call to slave-5.cluster/172.10.96.43:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] row 'some_table,some_key,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=slave-5.cluster,16020,1588595144765, seqNum=-1

    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:298)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:242)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
    at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:856)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:759)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:745)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:716)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:594)
    at org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:72)
    at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    ... 3 more

Thanks.

In the end, adding this property made it work:

        conf.set("hadoop.security.authentication", "kerberos");

Here is my final code:

    import java.io.IOException;
    import java.security.PrivilegedExceptionAction;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.security.UserGroupInformation;

    // Wrapper class name is arbitrary; imports shown so the example compiles as-is.
    public class HBaseKerberosClient {

        public static void main(String[] args) throws IOException, InterruptedException {
            // Point the JVM at the cluster's Kerberos configuration.
            System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");

            Configuration conf = HBaseConfiguration.create();
            // Without this property, UGI falls back to simple authentication.
            conf.set("hadoop.security.authentication", "kerberos");

            String principal = "user@REALM";

            // Log in from the keytab and create the connection as that user.
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation ugi =
                    UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, "/home/user/principal.keytab");

            Connection conn = ugi.doAs(new PrivilegedExceptionAction<Connection>() {
                @Override
                public Connection run() throws Exception {
                    return ConnectionFactory.createConnection(conf);
                }
            });

            TableName tableName = TableName.valueOf("some_table");
            Table table = conn.getTable(tableName);
            Result r = table.get(new Get(Bytes.toBytes("some_key")));

            System.out.println("result: " + r);
        }
    }
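
One note for long-running clients: the ticket obtained from the keytab eventually expires. A sketch of the usual guard, using a standard UserGroupInformation method, run periodically before issuing requests:

    // Sketch: re-login from the keytab when the TGT is close to expiring.
    // This is a no-op while the current ticket is still fresh.
    ugi.checkTGTAndReloginFromKeytab();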

I had the same problem. In my case, when I submitted a Spark job, I included the hadoop* and hbase* jars in the spark-submit command; after some checking, I noticed that the hadoop* and hbase* jars I included did not match the hbase/hadoop versions in my YARN cluster. The difference between those jars was small, but it messed up the Kerberos authentication.
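
A quick way to compare the versions actually on the client classpath against the cluster, as a sketch (both VersionInfo classes ship in the respective Hadoop and HBase client jars):

    // Sketch: print the Hadoop and HBase versions on the client classpath,
    // to compare against the versions the cluster runs.
    System.out.println("hadoop: " + org.apache.hadoop.util.VersionInfo.getVersion());
    System.out.println("hbase:  " + org.apache.hadoop.hbase.util.VersionInfo.getVersion());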
