
Java Client For Secure Hbase

Hi, I am trying to write a Java client for secure HBase. I also want to do kinit from the code itself, which is why I'm using the UserGroupInformation class. Can anyone point out where I am going wrong here?

This is the main method that I'm trying to connect to HBase from.

I have to add the configuration in the Configuration object rather than using the XML files, because the client can be located anywhere.

Please see the code below:

    public static void main(String[] args) {
        try {
        System.setProperty(CommonConstants.KRB_REALM, ConfigUtil.getProperty(CommonConstants.HADOOP_CONF, "krb.realm"));
        System.setProperty(CommonConstants.KRB_KDC, ConfigUtil.getProperty(CommonConstants.HADOOP_CONF,"krb.kdc"));
        System.setProperty(CommonConstants.KRB_DEBUG, "true");

        final Configuration config = HBaseConfiguration.create();

        config.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, AUTH_KRB);
        config.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, AUTHORIZATION);
        config.set(CommonConfigurationKeysPublic.FS_AUTOMATIC_CLOSE_KEY, AUTO_CLOSE);
        config.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, defaultFS);
        config.set("hbase.zookeeper.quorum", ConfigUtil.getProperty(CommonConstants.HBASE_CONF, "hbase.host"));
        config.set("hbase.zookeeper.property.clientPort", ConfigUtil.getProperty(CommonConstants.HBASE_CONF, "hbase.port"));
        config.set("hbase.client.retries.number", Integer.toString(0));
        config.set("zookeeper.session.timeout", Integer.toString(6000));
        config.set("zookeeper.recovery.retry", Integer.toString(0));
        config.set("hbase.master", "gauravt-namenode.pbi.global.pvt:60000");
        config.set("zookeeper.znode.parent", "/hbase-secure");
        config.set("hbase.rpc.engine", "org.apache.hadoop.hbase.ipc.SecureRpcEngine");
        config.set("hbase.security.authentication", AUTH_KRB);
        config.set("hbase.security.authorization", AUTHORIZATION);
        config.set("hbase.master.kerberos.principal", "hbase/gauravt-namenode.pbi.global.pvt@pbi.global.pvt");
        config.set("hbase.master.keytab.file", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");
        config.set("hbase.regionserver.kerberos.principal", "hbase/gauravt-datanode2.pbi.global.pvt@pbi.global.pvt");
        config.set("hbase.regionserver.keytab.file", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");

        UserGroupInformation.setConfiguration(config);
        UserGroupInformation userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase/gauravt-datanode2.pbi.global.pvt@pbi.global.pvt", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");
        UserGroupInformation.setLoginUser(userGroupInformation);

        User user = User.create(userGroupInformation);

        user.runAs(new PrivilegedExceptionAction<Object>() {

            @Override
            public Object run() throws Exception {
                HBaseAdmin admins = new HBaseAdmin(config);

                if (admins.isTableAvailable("ambarismoketest")) {
                    System.out.println("Table is available");
                }

                HConnection connection = HConnectionManager.createConnection(config);

                HTableInterface table = connection.getTable("ambarismoketest");



                admins.close();
                System.out.println(table.get(new Get(null)));
                return table.get(new Get(null));
            }
        });
        System.out.println(UserGroupInformation.getLoginUser().getUserName());


        } catch (Exception e) {
            e.printStackTrace();
        }
    }

I'm getting the following exception:

    Caused by: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
        at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.readStatus(HBaseSaslRpcClient.java:110)
        at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:146)
        at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:762)
        at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$600(RpcClient.java:354)
        at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:883)
        at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:880)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:880)
        ... 33 more
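(A quick way to rule out the keytab/principal pair itself is to test it with the MIT Kerberos command-line tools outside the JVM; the principal and keytab path below are the ones from the snippet above, and MIT Kerberos client tools are assumed to be installed.)

```shell
# Try to obtain a TGT with the same keytab and principal the Java code uses.
kinit -kt D:/var/lib/bda/secure/keytabs/hbase.service.keytab \
      hbase/gauravt-datanode2.pbi.global.pvt@pbi.global.pvt

# List the resulting ticket cache; a valid TGT for the principal should appear.
klist
```

If kinit itself fails here, the problem is with the keytab, principal, or krb5 configuration rather than the Java code.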

Any pointers would be helpful.

The above works nicely, but I've seen a lot of folks struggle with setting all of the right properties in the Configuration object. There's no de facto list that I've found of exactly what you need and don't need, and it is painfully dependent on your cluster configuration.

The surefire way is to have a copy of your HBase configuration files on your classpath, since your client can be anywhere, as you mentioned. Then you can add the resources to your Configuration object without having to specify all the properties:

    Configuration conf = HBaseConfiguration.create();
    conf.addResource("core-site.xml");
    conf.addResource("hbase-site.xml");
    conf.addResource("hdfs-site.xml");

Here are some sources backing this approach: IBM, Scalding (Scala).

Also note that this approach doesn't limit you to using the internal ZooKeeper principal and keytab; i.e., you can create keytabs for applications or Active Directory users and leave the internally generated keytabs for the daemons to authenticate amongst themselves.
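For example, with MIT Kerberos an application-specific principal and keytab could be created like this (the principal name, admin credentials, and paths are illustrative, not from the original post):

```shell
# Create a dedicated application principal with a random key...
kadmin -p admin/admin -q "addprinc -randkey myapp/client.example.com@EXAMPLE.COM"

# ...then export its key into a keytab the client application can use.
kadmin -p admin/admin -q "ktadd -k /etc/security/keytabs/myapp.keytab myapp/client.example.com@EXAMPLE.COM"
```

The application then logs in with its own keytab, and the service keytabs (hbase, zookeeper, etc.) stay on the cluster nodes where they belong.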

Not sure if you still need help. I think setting the "hadoop.security.authentication" property is missing from your snippet.

I am using the following code snippet to connect to secure HBase (on CDH5). You can give it a try:

    config.set("hbase.zookeeper.quorum", zookeeperHosts);
    config.set("hbase.zookeeper.property.clientPort", zookeeperPort);
    config.set("hadoop.security.authentication", "kerberos");
    config.set("hbase.security.authentication", "kerberos");
    config.set("hbase.master.kerberos.principal", HBASE_MASTER_PRINCIPAL);
    config.set("hbase.regionserver.kerberos.principal", HBASE_RS_PRINCIPAL);

    UserGroupInformation.setConfiguration(config);
    UserGroupInformation.loginUserFromKeytab(ZOOKEEPER_PRINCIPAL, ZOOKEEPER_KEYTAB);

    HBaseAdmin admins = new HBaseAdmin(config);
    TableName[] tables = admins.listTableNames();

    for (TableName table : tables) {
        System.out.println(table.toString());
    }
In JDK 1.8, you also need to set:

    System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");

    config.set("hbase.zookeeper.quorum", zookeeperHosts);
    config.set("hbase.zookeeper.property.clientPort", zookeeperPort);
    config.set("hadoop.security.authentication", "kerberos");
    config.set("hbase.security.authentication", "kerberos");
    config.set("hbase.master.kerberos.principal", HBASE_MASTER_PRINCIPAL);
    config.set("hbase.regionserver.kerberos.principal", HBASE_RS_PRINCIPAL);
    System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
    UserGroupInformation.setConfiguration(config);
    UserGroupInformation.loginUserFromKeytab(ZOOKEEPER_PRINCIPAL, ZOOKEEPER_KEYTAB);

    HBaseAdmin admins = new HBaseAdmin(config);
    TableName[] tables = admins.listTableNames();

    for (TableName table : tables) {
        System.out.println(table.toString());
    }

Source: http://hbase.apache.org/book.html#trouble.client, question 142.9.

I think the best resource is https://scalding.io/2015/02/making-your-hbase-client-work-in-a-kerberized-environment/

To make the code work you don't have to change any line from the ones written at the top of this post; you just have to make your client able to access the full HBase configuration. This simply means changing your runtime classpath to:

    /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hbase/conf:target/scala-2.11/hbase-assembly-1.0.jar

This will make everything run smoothly. It is specific to CDH 5.3, but you can adapt it to your cluster configuration.
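In general terms (the conf directories, jar name, and main class below are placeholders rather than the CDH-specific paths above), the idea is simply to put the cluster's configuration directory ahead of your application jar on the classpath:

```shell
# The conf directories must contain the cluster's actual core-site.xml,
# hbase-site.xml, and hdfs-site.xml; HBaseConfiguration.create() then
# picks them up automatically without any config.set(...) calls.
java -cp /etc/hbase/conf:/etc/hadoop/conf:myapp.jar com.example.SecureHBaseClient
```

Putting the conf directories first ensures they win over any stale XML files bundled inside the jar.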

P.S. There is no need for this:

    conf.addResource("core-site.xml");
    conf.addResource("hbase-site.xml");
    conf.addResource("hdfs-site.xml");

because HBaseConfiguration already has:

    public static Configuration addHbaseResources(Configuration conf) {
        conf.addResource("hbase-default.xml");
        conf.addResource("hbase-site.xml");
        // ...
        return conf;
    }
