
Spark 3.2.1 fetch HBase data not working with NewAPIHadoopRDD

Below is the sample code snippet used to fetch data from HBase. This worked fine with Spark 3.1.2. However, after upgrading to Spark 3.2.1 it no longer works: the returned RDD does not contain any values, and no exception is thrown.

import java.util.Base64
import org.apache.hadoop.hbase.client.{Result, Scan}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.SparkContext
import org.apache.spark.rdd.{NewHadoopRDD, RDD}

// SparkHBaseContext and SparkLoggerParams are application-specific helpers (not shown here).
def getInfo(sc: SparkContext, startDate: String, cachingValue: Int, sparkLoggerParams: SparkLoggerParams, zkIP: String, zkPort: String): RDD[String] = {
    val scan = new Scan()
    scan.addFamily(Bytes.toBytes("family"))
    scan.addColumn(Bytes.toBytes("family"), Bytes.toBytes("time"))
    val rdd = getHbaseConfiguredRDDFromScan(sc, zkIP, zkPort, "myTable", scan, cachingValue, sparkLoggerParams)
    val output: RDD[String] = rdd.map { row =>
      Bytes.toString(row._2.getRow) // row key as a String
    }
    output
  }
 
def getHbaseConfiguredRDDFromScan(sc: SparkContext, zkIP: String, zkPort: String, tableName: String,
                                    scan: Scan, cachingValue: Int, sparkLoggerParams: SparkLoggerParams): NewHadoopRDD[ImmutableBytesWritable, Result] = {
    scan.setCaching(cachingValue)
    // TableInputFormat expects the Scan serialized as a Base64-encoded protobuf string.
    val scanString = Base64.getEncoder.encodeToString(org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(scan).toByteArray)
    val hbaseContext = new SparkHBaseContext(zkIP, zkPort)
    val hbaseConfig = hbaseContext.getConfiguration()
    hbaseConfig.set(TableInputFormat.INPUT_TABLE, tableName)
    hbaseConfig.set(TableInputFormat.SCAN, scanString)
    // Build the RDD over the HBase table via the new Hadoop InputFormat API.
    sc.newAPIHadoopRDD(
      hbaseConfig,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result]
    ).asInstanceOf[NewHadoopRDD[ImmutableBytesWritable, Result]]
  }

Also, if we fetch using Scan directly, without going through NewAPIHadoopRDD, it works.
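For reference, this is roughly what such a direct fetch can look like, using the plain HBase 2.x client API (ConnectionFactory, Table.getScanner). It is a minimal, illustrative sketch rather than our exact code; "myTable" and "family" are the same placeholders as in the snippet above.

  import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
  import org.apache.hadoop.hbase.client.{ConnectionFactory, Scan}
  import org.apache.hadoop.hbase.util.Bytes
  import scala.collection.JavaConverters._

  val hbaseConf = HBaseConfiguration.create()
  hbaseConf.set("hbase.zookeeper.quorum", zkIP)
  hbaseConf.set("hbase.zookeeper.property.clientPort", zkPort)

  val connection = ConnectionFactory.createConnection(hbaseConf)
  try {
    val table = connection.getTable(TableName.valueOf("myTable"))
    val scan = new Scan()
    scan.addFamily(Bytes.toBytes("family"))
    val scanner = table.getScanner(scan)
    try {
      // Iterate the scan results on the client and print each row key.
      scanner.asScala.foreach(result => println(Bytes.toString(result.getRow)))
    } finally scanner.close()
  } finally connection.close()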

Software versions:

  • Spark: 3.2.1, prebuilt with user-provided Apache Hadoop
  • Scala: 2.12.10
  • HBase: 2.4.9
  • Hadoop: 2.10.1

I found out the solution to this one. See the upgrade guide from Spark 3.1.x to Spark 3.2.x: https://spark.apache.org/docs/latest/core-migration-guide.html

Since Spark 3.2, spark.hadoopRDD.ignoreEmptySplits is set to true by default, which means Spark will not create empty partitions for empty input splits. To restore the behavior before Spark 3.2, you can set spark.hadoopRDD.ignoreEmptySplits to false.
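In our case the NewAPIHadoopRDD came back empty apparently because the splits produced by HBase's TableInputFormat are reported with a length of 0, so under the new default they are all filtered out. If you prefer not to touch spark-submit or spark-defaults.conf, the flag can also be set programmatically when the SparkConf/SparkContext is created. A minimal sketch, assuming you control the context creation (the application name is illustrative):

  import org.apache.spark.{SparkConf, SparkContext}

  // Restore the pre-3.2 behaviour: keep partitions even for input splits
  // whose reported length is 0.
  val sparkConf = new SparkConf()
    .setAppName("hbase-fetch-example") // illustrative name
    .set("spark.hadoopRDD.ignoreEmptySplits", "false")
  val sc = new SparkContext(sparkConf)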

It can be set like this on spark-submit:

  ./spark-submit \
  --class org.apache.hadoop.hbase.spark.example.hbasecontext.HBaseDistributedScanExample \
  --master  spark://localhost:7077 \
  --conf "spark.hadoopRDD.ignoreEmptySplits=false" \
  --jars ... \
  /tmp/hbase-spark-1.0.1-SNAPSHOT.jar YourHBaseTable

Alternatively, you can set this globally in $SPARK_HOME/conf/spark-defaults.conf so that it applies to every Spark application.

spark.hadoopRDD.ignoreEmptySplits false
