Connect with a secured HBase using a Java client

I am trying to connect to a secured HBase with Kerberos. It is an HBase deployed on an HDP3 cluster. To be precise, I am trying to reach it with a Java client from a host outside the cluster.

Here is my code:

import java.io.File;
import java.io.IOException;
import java.security.PrivilegedAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.security.UserGroupInformation;

public class HBaseCli {

    public static void main(String[] args) throws IOException {
        // Point the JVM at the Kerberos and JAAS configuration, with debug output for both
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        System.setProperty("sun.security.krb5.debug", "true");
        System.setProperty("java.security.debug", "gssloginconfig,configfile,configparser,logincontext");
        System.setProperty("java.security.auth.login.config", "hbase.conf");

        Configuration conf = HBaseConfiguration.create();

        String principal = "user@REALM";
        File keytab = new File("/home/user/user.keytab");

        // Log in from the keytab and run the client calls as that user
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab.getAbsolutePath());

        ugi.doAs(new PrivilegedAction<Void>() {
            @Override
            public Void run() {
                try {
                    TableName tableName = TableName.valueOf("some_table");
                    final Connection conn = ConnectionFactory.createConnection(conf);
                    System.out.println(" go ");
                    Table table = conn.getTable(tableName);
                    Result r = table.get(new Get(Bytes.toBytes("some_key")));
                    System.out.println(r);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                return null;
            }
        });
    }
}

Here is my JAAS file configuration (Client is the entry name the ZooKeeper client looks up by default):

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/home/user/user.keytab"
  principal="user@REALM";
};

All the ZooKeeper and other settings are taken from the hbase-site.xml file provided by Ambari.
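For reference, a minimal sketch of what those hbase-site.xml settings amount to when set programmatically. The quorum hosts and the /hbase-secure znode parent are taken from the ZooKeeper trace below; the server principals are placeholders for whatever Ambari actually configured on this cluster:

    // Standard HBase client keys, normally picked up from hbase-site.xml.
    // Values below are placeholders matching the trace in this question.
    conf.set("hbase.zookeeper.quorum", "node2,node3,node4");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    conf.set("zookeeper.znode.parent", "/hbase-secure");
    conf.set("hbase.security.authentication", "kerberos");
    conf.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@REALM");
    conf.set("hbase.master.kerberos.principal", "hbase/_HOST@REALM");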

I do not get any error; the client just goes into an endless loop, with a trace like this:

[ReadOnlyZKClient-node2:2181,node3:2181,node4:2181@0x50ad3bc1-SendThread(node4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x371f62d9b230031, packet:: clientPath:/hbase-secure/meta-region-server serverPath:/hbase-secure/meta-region-server finished:false header:: 141,4 replyHeader:: 141,365072222881,0 request:: '/hbase-secure/meta-region-server,F response:: #ffffffff000146d61737465723a313630303019fffffff6ffffff864dffffff99ffffff85151c50425546a11a56e6f64653410ffffff947d18ffffffb0ffffffa6ffffff81ffffffc5ffffff9f2e100183,s{365072220963,365072222074,1588973398227,1589014218472,5,0,0,0,52,0,365072220963}
[ReadOnlyZKClient-node2:2181,node3:2181,node4:2181@0x50ad3bc1-SendThread(node4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x371f62d9b230031, packet:: clientPath:/hbase-secure/meta-region-server serverPath:/hbase-secure/meta-region-server finished:false header:: 142,4 replyHeader:: 142,365072222881,0 request:: '/hbase-secure/meta-region-server,F response:: #ffffffff000146d61737465723a313630303019fffffff6ffffff864dffffff99ffffff85151c50425546a11a56e6f64653410ffffff947d18ffffffb0ffffffa6ffffff81ffffffc5ffffff9f2e100183,s{365072220963,365072222074,1588973398227,1589014218472,5,0,0,0,52,0,365072220963}
[ReadOnlyZKClient-node2:2181,node3:2181,node4:2181@0x50ad3bc1-SendThread(node4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x371f62d9b230031, packet:: clientPath:/hbase-secure/meta-region-server serverPath:/hbase-secure/meta-region-server finished:false header:: 143,4 replyHeader:: 143,365072222881,0 request:: '/hbase-secure/meta-region-server,F response:: #ffffffff000146d61737465723a313630303019fffffff6ffffff864dffffff99ffffff85151c50425546a11a56e6f64653410ffffff947d18ffffffb0ffffffa6ffffff81ffffffc5ffffff9f2e100183,s{365072220963,365072222074,1588973398227,1589014218472,5,0,0,0,52,0,365072220963}

EDIT

OK, I do get this error; I just had not waited long enough:

Exception in thread "main" java.net.SocketTimeoutException: callTimeout=1200000, callDuration=2350283: Failed after attempts=36, exceptions:
Mon May 11 13:53:42 CEST 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=70631: Call to slave-5.cluster/172.10.96.43:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] row 'some_table,some_key,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=slave-5.cluster/172.10.96.43:16020,16020,1588595144765, seqNum=-1
 row 'row_key' on table 'some_table' at null
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:159)
    at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
    at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
    at internal.holly.devoptools.hbase.HBaseCli.main(HBaseCli.java:77)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Mon May 11 13:53:42 CEST 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=70631: Call to slave-5.cluster/172.10.96.43:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] row 'some_table,some_key,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=slave-5.cluster,16020,1588595144765, seqNum=-1

    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:298)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:242)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
    at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:856)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:759)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:745)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:716)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:594)
    at org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:72)
    at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    ... 3 more
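The GSSException above ("Failed to find any Kerberos tgt") means the RPC to the region server went out without a Kerberos ticket attached, even though the ZooKeeper reads succeeded. A quick diagnostic (a sketch, using the same UserGroupInformation API as the code above) that can be dropped in right after the login call:

    // If this prints false, Hadoop security is still in SIMPLE mode and
    // no TGT will be presented to the region servers.
    System.out.println("security enabled: " + UserGroupInformation.isSecurityEnabled());
    System.out.println("login user: " + UserGroupInformation.getLoginUser());
    System.out.println("kerberos creds: " + UserGroupInformation.getLoginUser().hasKerberosCredentials());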

Thanks.

In the end, adding this property made it work. Without it, UserGroupInformation stays in the default simple-authentication mode and never attaches a Kerberos ticket to the RPC, which is exactly the "Failed to find any Kerberos tgt" error above:

        conf.set("hadoop.security.authentication", "kerberos");

Here is my final code:

    public static void main(String[] args) throws IOException, InterruptedException {
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");

        Configuration conf = HBaseConfiguration.create();
        // The missing piece: switch UserGroupInformation from simple auth to Kerberos
        conf.set("hadoop.security.authentication", "kerberos");

        String principal = "user@REALM";

        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, "/home/user/principal.keytab");

        // Only the connection is created as the logged-in user;
        // it keeps those credentials for the later table calls
        Connection conn = ugi.doAs(new PrivilegedExceptionAction<Connection>() {
            @Override
            public Connection run() throws Exception {
                return ConnectionFactory.createConnection(conf);
            }
        });

        TableName tableName = TableName.valueOf("some_table");
        Table table = conn.getTable(tableName);
        Result r = table.get(new Get(Bytes.toBytes("some_key")));

        System.out.println("result: " + r);

    }
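Two details worth noting about this version. The JAAS file and the java.security.auth.login.config property are gone, since loginUserFromKeytabAndReturnUGI performs the keytab login programmatically once hadoop.security.authentication is set to kerberos. And for a long-running client the ticket eventually expires, so a periodic relogin (a sketch, same UserGroupInformation API) avoids the same GSS error reappearing hours later:

        // Renews the TGT from the keytab if it is close to expiring;
        // effectively a no-op while the current ticket is still valid.
        ugi.checkTGTAndReloginFromKeytab();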

I had the same problem. In my case, when I submitted a Spark job I included the hadoop* and hbase* jars in the spark-submit command; after some checking I noticed that the hadoop* and hbase* jars bundled in my YARN cluster did not match those same hbase/hadoop versions. The differences between those jars were small, but they were enough to break the Kerberos authentication.
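A quick way to spot that kind of mismatch (a sketch; UserGroupInformation is just one example of a class worth probing) is to print which jar each contested class was actually loaded from at runtime:

    // Prints the jar on the runtime classpath that provided this class
    System.out.println(UserGroupInformation.class
            .getProtectionDomain().getCodeSource().getLocation());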
