
Accessing HBase tables through Spark

I am using this code example http://www.vidyasource.com/blog/Programming/Scala/Java/Data/Hadoop/Analytics/2014/01/25/lighting-a-spark-with-hbase to read an HBase table with Spark. The only change I made is to set hbase.zookeeper.quorum in code, because it is not being picked up from hbase-site.xml.

Spark 1.5.3, HBase 0.98.0
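
For reference, here is a minimal sketch of the pattern that example uses (newAPIHadoopRDD with TableInputFormat), with the quorum set in code. The quorum hosts and table name are placeholders, not my actual values:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object HBaseRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hbase-read"))

    // hbase-site.xml is not being picked up, so set the quorum in code
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "zk-host-1,zk-host-2") // placeholder hosts
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table")        // placeholder table name

    // Read the table as an RDD of (rowkey, Result) pairs
    val rdd = sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])
    println(rdd.count())
  }
}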

I am facing this error:

java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
at org.apache.hadoop.hbase.protobuf.RequestConverter.buildRegionSpecifier(RequestConverter.java:921)
at org.apache.hadoop.hbase.protobuf.RequestConverter.buildGetRowOrBeforeRequest(RequestConverter.java:132)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1520)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1294)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1128)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1111)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1070)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:347)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:201)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:159)
at test.MyHBase.getTable(MyHBase.scala:33)
at test.MyHBase.<init>(MyHBase.scala:11)
at $line43.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.fetch(<console>:30)
at $line44.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:49)
at $line44.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:49)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
at scala.collection.AbstractIterator.to(Iterator.scala:1194)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:300)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1194)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:287)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1194)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

This is an HBase issue, tracked and fixed in HBASE-10304. The problem is that the HBaseZeroCopyByteString class is declared in the protobuf package (com.google.protobuf) but ships in a different jar (hbase-protocol). As a result, a different classloader can end up loading it and then fail to find the superclass declaration. It is fixed in HBase 0.99.

I think a workaround may be to make sure the jars you submit to Spark include the ones that contain com.google.protobuf.LiteralByteString (protobuf-java) and com.google.protobuf.HBaseZeroCopyByteString (hbase-protocol), so that both classes are loaded by the same classloader; see the sketch below.
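
Since your stack trace shows spark-shell REPL frames and the failure happens on the executor, something along these lines might work; the paths and version numbers are placeholders for wherever those jars live on your cluster:

spark-shell \
  --jars /path/to/hbase-protocol-0.98.0.jar,/path/to/protobuf-java-2.5.0.jar \
  --driver-class-path /path/to/hbase-protocol-0.98.0.jar \
  --conf spark.executor.extraClassPath=/path/to/hbase-protocol-0.98.0.jar

# --jars ships the jars to the cluster; the extra driver and executor
# classpath entries put hbase-protocol on the system classpath so one
# classloader sees both it and protobuf-java's LiteralByteString.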

In the end, you should really upgrade. Can you imagine the list of bugs that have been fixed since 0.98? Do you plan to hit them all and work around them one by one?

