
Spark-SQL dataframe count gives java.lang.ArrayIndexOutOfBoundsException

I am creating a dataframe with Apache Spark version 2.3.1. When I try to count the dataframe, I get the following error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 12, analitik11.{hostname}, executor 1): java.lang.ArrayIndexOutOfBoundsException: 2
        at org.apache.spark.sql.vectorized.ColumnarBatch.column(ColumnarBatch.java:98)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.datasourcev2scan_nextBatch_0$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
  at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
  at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
  at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
  at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
  at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
  ... 49 elided
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
  at org.apache.spark.sql.vectorized.ColumnarBatch.column(ColumnarBatch.java:98)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.datasourcev2scan_nextBatch_0$(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
  at org.apache.spark.scheduler.Task.run(Task.scala:109)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

We use com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder to connect to Hive and read a table from it. The code that produces the dataframe is as follows:

    val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build() 

    val edgesTest = hive.executeQuery("select trim(s_vno) as src ,trim(a_vno) as dst, share, administrator, account, all_share " +
      "from ebyn.babs_edges_2018 where (share <> 0 or administrator <> 0 or account <> 0 or all_share <> 0) and trim(date) = '201801'")

    val share_org_edges = edgesTest.alias("df1")
      .join(edgesTest.alias("df2"), "src")
      .where("df1.dst <> df2.dst")
      .groupBy(
        greatest("df1.dst", "df2.dst").as("src"),
        least("df1.dst", "df2.dst").as("dst"))
      .agg(
        max("df1.share").as("share"),
        max("df1.administrator").as("administrator"),
        max("df1.account").as("account"),
        max("df1.all_share").as("all_share"))
      .persist

    share_org_edges.count

The table properties are as follows:

CREATE TABLE `EBYN.BABS_EDGES_2018`(                                         
   `date` string,                                                            
   `a_vno` string,                                                            
   `s_vno` string,                                                            
   `amount` double,                                                        
   `num` int,                                                            
   `share` int,                                                               
   `share_ratio` int,                                                            
   `administrator` int,                                                            
   `account` int,                                                            
   `share-all` int)                                                        
 COMMENT 'Imported by sqoop on 2018/10/11 11:10:16'                           
 ROW FORMAT SERDE                                                             
   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'                       
 WITH SERDEPROPERTIES (                                                       
   'field.delim'='',                                                         
   'line.delim'='\n',                                                         
   'serialization.format'='')                                                
 STORED AS INPUTFORMAT                                                        
   'org.apache.hadoop.mapred.TextInputFormat'                                 
 OUTPUTFORMAT                                                                 
   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'               
 LOCATION                                                                     
   'hdfs://ggmprod/warehouse/tablespace/managed/hive/ebyn.db/babs_edges_2018' 
 TBLPROPERTIES (                                                              
   'bucketing_version'='2',                                                   
   'transactional'='true',                                                    
   'transactional_properties'='insert_only',                                  
   'transient_lastDdlTime'='1539245438')                            

Problem

edgesTest is a dataframe whose logical plan contains a single DataSourceV2Relation node. That DataSourceV2Relation node holds a mutable HiveWarehouseDataSourceReader which will be used to read the Hive table. The edgesTest dataframe is used twice: as df1 and as df2.

During Spark's logical plan optimization, column pruning happens twice on the same mutable HiveWarehouseDataSourceReader instance. The second column pruning overwrites the first by setting its own required columns.

At execution time, the reader fires the same query twice against the Hive warehouse, with only the columns required by the second pruning. The Spark-generated code therefore does not find the columns it expects in the Hive query result, which triggers the ArrayIndexOutOfBoundsException.
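
As an illustration, here is a minimal sketch of a possible workaround that follows from this explanation (not the official fix, and assuming the hive session built in the question): reading the table through two separate executeQuery calls gives each side of the self-join its own reader instance, so the two column prunings no longer overwrite each other.

    import org.apache.spark.sql.functions.{greatest, least, max}

    // Possible workaround (a sketch, not the Hortonworks fix): build two independent
    // DataFrames from the same query, so each side of the self-join carries its own
    // DataSourceV2Relation and therefore its own HiveWarehouseDataSourceReader.
    val query = "select trim(s_vno) as src, trim(a_vno) as dst, share, administrator, account, all_share " +
      "from ebyn.babs_edges_2018 where (share <> 0 or administrator <> 0 or account <> 0 or all_share <> 0) and trim(date) = '201801'"

    val left  = hive.executeQuery(query)   // first reader instance
    val right = hive.executeQuery(query)   // second, independent reader instance

    // Column pruning now runs on two separate readers instead of one shared mutable one.
    val share_org_edges = left.alias("df1")
      .join(right.alias("df2"), "src")
      .where("df1.dst <> df2.dst")
      .groupBy(greatest("df1.dst", "df2.dst").as("src"), least("df1.dst", "df2.dst").as("dst"))
      .agg(
        max("df1.share").as("share"),
        max("df1.administrator").as("administrator"),
        max("df1.account").as("account"),
        max("df1.all_share").as("all_share"))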

Solution

Spark 2.4
DataSourceV2 has been improved, in particular by SPARK-23203 DataSourceV2 should use immutable trees.

Spark 2.3
Disable column pruning in the HiveWarehouseConnector data source reader.

Hortonworks has fixed this issue, as stated in the HDP 3.1.5 release notes.
We can find the correction in its HiveWarehouseConnector github repository:

    if (useSpark23xReader) {
      LOG.info("Using reader HiveWarehouseDataSourceReaderForSpark23x with column pruning disabled");
      return new HiveWarehouseDataSourceReaderForSpark23x(params);
    } else if (disablePruningPushdown) {
      LOG.info("Using reader HiveWarehouseDataSourceReader with column pruning and filter pushdown disabled");
      return new HiveWarehouseDataSourceReader(params);
    } else {
      LOG.info("Using reader PrunedFilteredHiveWarehouseDataSourceReader");
      return new PrunedFilteredHiveWarehouseDataSourceReader(params);
    }
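
The disablePruningPushdown branch above is driven by a connector setting. As a sketch, assuming the spark.datasource.hive.warehouse.disable.pruning.and.pushdowns property referenced in the HDP 3.1.5 documentation (and in the answer below), and that the deployed connector version honors it, the setting could be applied before building the session:

    // Sketch: disable pruning/filter pushdown for the HiveWarehouseConnector reader.
    // The property name is the one referenced in the HDP 3.1.5 docs and in the answer
    // below; whether it is honored depends on the connector version deployed.
    spark.conf.set("spark.datasource.hive.warehouse.disable.pruning.and.pushdowns", "true")
    val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build()

Note that the answer below reports that this flag alone was not enough in their environment, and persisting one side of the join was used as a workaround instead.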

Furthermore, the HDP 3.1.5 Hive integration documentation specifies:

To prevent data correctness issues in this release, pruning and projection pushdown are disabled by default.
...
To prevent these issues and ensure correct results, do not enable pruning and pushdowns.

I had the same problem, and even after disabling data pruning/pushdown it did not work.

The documentation is under "Pruning and Pushdowns" at https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/integrating-hive/content/hive-read-write-operations.html

In Python I set: spark.conf.set('spark.datasource.hive.warehouse.disable.pruning.and.pushdowns', 'true')

But this did not work. Instead, I found a solution/workaround, which is to persist one of the tables (the one identified as problematic):

df1 = df.filter(xx).join(xx).persist()

My guess from the documentation is that Spark performs a projection pushdown to the parent dataframe, and the error occurs when a dataframe is joined with itself. Can someone explain this?

Also, let me know whether it works for you.
