
Spark-SQL dataframe count gives java.lang.ArrayIndexOutOfBoundsException

I am creating a dataframe with Apache Spark version 2.3.1. When I try to count the dataframe, I get the following error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 12, analitik11.{hostname}, executor 1): java.lang.ArrayIndexOutOfBoundsException: 2
        at org.apache.spark.sql.vectorized.ColumnarBatch.column(ColumnarBatch.java:98)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.datasourcev2scan_nextBatch_0$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
  at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
  at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
  at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
  at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
  at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
  ... 49 elided
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
  at org.apache.spark.sql.vectorized.ColumnarBatch.column(ColumnarBatch.java:98)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.datasourcev2scan_nextBatch_0$(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
  at org.apache.spark.scheduler.Task.run(Task.scala:109)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

We use com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder to connect to Hive and read tables from it. The code that produces the dataframe is as follows:

    val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build() 

    val edgesTest = hive.executeQuery("select trim(s_vno) as src ,trim(a_vno) as dst, share, administrator, account, all_share " +
      "from ebyn.babs_edges_2018 where (share <> 0 or administrator <> 0 or account <> 0 or all_share <> 0) and trim(date) = '201801'")

    val share_org_edges = edgesTest.alias("df1")
      .join(edgesTest.alias("df2"), "src")
      .where("df1.dst <> df2.dst")
      .groupBy(
        greatest("df1.dst", "df2.dst").as("src"),
        least("df1.dst", "df2.dst").as("dst"))
      .agg(
        max("df1.share").as("share"),
        max("df1.administrator").as("administrator"),
        max("df1.account").as("account"),
        max("df1.all_share").as("all_share"))
      .persist

    share_org_edges.count

The table definition is as follows:

CREATE TABLE `EBYN.BABS_EDGES_2018`(                                         
   `date` string,                                                            
   `a_vno` string,                                                            
   `s_vno` string,                                                            
   `amount` double,                                                        
   `num` int,                                                            
   `share` int,                                                               
   `share_ratio` int,                                                            
   `administrator` int,                                                            
   `account` int,                                                            
   `share-all` int)                                                        
 COMMENT 'Imported by sqoop on 2018/10/11 11:10:16'                           
 ROW FORMAT SERDE                                                             
   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'                       
 WITH SERDEPROPERTIES (                                                       
   'field.delim'='',                                                         
   'line.delim'='\n',                                                         
   'serialization.format'='')                                                
 STORED AS INPUTFORMAT                                                        
   'org.apache.hadoop.mapred.TextInputFormat'                                 
 OUTPUTFORMAT                                                                 
   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'               
 LOCATION                                                                     
   'hdfs://ggmprod/warehouse/tablespace/managed/hive/ebyn.db/babs_edges_2018' 
 TBLPROPERTIES (                                                              
   'bucketing_version'='2',                                                   
   'transactional'='true',                                                    
   'transactional_properties'='insert_only',                                  
   'transient_lastDdlTime'='1539245438')                            

Problem

edgesTest is a dataframe whose logical plan contains a single DataSourceV2Relation node. This DataSourceV2Relation node holds a mutable HiveWarehouseDataSourceReader that will be used to read the Hive table. The edgesTest dataframe is used twice: as df1 and as df2.

During Spark's logical plan optimization, column pruning is applied twice to the same mutable HiveWarehouseDataSourceReader instance. The second pruning overwrites the first one by setting its own required columns.

During execution, the reader issues the same query to the Hive warehouse twice, containing only the columns required by the second pruning. The code generated by Spark then fails to find the columns it expects in the Hive query result, which raises the ArrayIndexOutOfBoundsException.
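A minimal, hypothetical Scala sketch of this failure mode (the class below is a stand-in, not the real DataSourceV2 or HiveWarehouseConnector API) showing how a shared mutable reader loses the first pruning when the same dataframe appears on both sides of the join:

    // Stand-in for a mutable DataSourceV2 reader such as HiveWarehouseDataSourceReader:
    // the pruned schema is mutable state shared by every scan of the relation.
    class MutableReader(allColumns: Seq[String]) {
      private var requiredColumns: Seq[String] = allColumns

      // Each column-pruning pass overwrites whatever a previous pass requested.
      def pruneColumns(cols: Seq[String]): Unit = requiredColumns = cols

      // At execution time every scan only sees the columns of the *last* pruning.
      def readBatch(): Map[String, Int] = requiredColumns.zipWithIndex.toMap
    }

    val reader = new MutableReader(
      Seq("src", "dst", "share", "administrator", "account", "all_share"))

    reader.pruneColumns(Seq("src", "dst", "share", "administrator", "account", "all_share")) // pruning for df1
    reader.pruneColumns(Seq("src", "dst"))                                                   // pruning for df2 wins

    val batch = reader.readBatch()
    // The code generated for df1 still asks for column index 2 ("share"),
    // but the batch now only has indices 0 and 1 -> ArrayIndexOutOfBoundsException: 2.
    println(batch.get("share")) // None: the column is simply no longer read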

Solution

Spark 2.4

DataSourceV2 has been improved, in particular by SPARK-23203: DataSourceV2 should use immutable trees.

Spark 2.3

Disable column pruning in the HiveWarehouseConnector data source reader.

Hortonworks has fixed this issue, as stated in the HDP 3.1.5 release notes.
The fix can be found in the HiveWarehouseConnector GitHub repository:

    if (useSpark23xReader) {
      LOG.info("Using reader HiveWarehouseDataSourceReaderForSpark23x with column pruning disabled");
      return new HiveWarehouseDataSourceReaderForSpark23x(params);
    } else if (disablePruningPushdown) {
      LOG.info("Using reader HiveWarehouseDataSourceReader with column pruning and filter pushdown disabled");
      return new HiveWarehouseDataSourceReader(params);
    } else {
      LOG.info("Using reader PrunedFilteredHiveWarehouseDataSourceReader");
      return new PrunedFilteredHiveWarehouseDataSourceReader(params);
    }

In addition, the HDP 3.1.5 Hive integration documentation specifies:

To prevent data correctness issues in this release, pruning and projection pushdown are disabled by default.
...
To prevent these issues and ensure correct results, do not enable pruning and pushdowns.
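As a sketch only, assuming the configuration key quoted in the answer below (spark.datasource.hive.warehouse.disable.pruning.and.pushdowns; verify against your HWC version), pruning and pushdown can be kept explicitly disabled when building the session:

    import org.apache.spark.sql.SparkSession

    // Hypothetical setup: pass the disable flag as a Spark conf before building the HWC session.
    val spark = SparkSession.builder()
      .appName("hwc-no-pruning-pushdown")
      .config("spark.datasource.hive.warehouse.disable.pruning.and.pushdowns", "true")
      .enableHiveSupport()
      .getOrCreate()

    val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build()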

I ran into the same issue, and even after disabling pruning/pushdown it still did not work.

The documentation is under "Pruning and Pushdowns" at https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/integrating-hive/content/hive-read-write-operations.html

In Python I set: spark.conf.set('spark.datasource.hive.warehouse.disable.pruning.and.pushdowns', 'true')

But that did not work. Instead, I found a solution/workaround, which is to persist one of the tables (the one identified as problematic):

df1 = df.filter(xx).join(xx).persist()
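For reference, a sketch (untested, names taken from the question) of how this persist workaround might look in the original Scala code:

    import org.apache.spark.sql.functions.{greatest, least, max}

    // Persist and materialize edgesTest once, so that df1 and df2 read from the cache
    // instead of going through the shared HiveWarehouseDataSourceReader twice.
    val edgesCached = edgesTest.persist()
    edgesCached.count()

    val share_org_edges = edgesCached.alias("df1")
      .join(edgesCached.alias("df2"), "src")
      .where("df1.dst <> df2.dst")
      .groupBy(
        greatest("df1.dst", "df2.dst").as("src"),
        least("df1.dst", "df2.dst").as("dst"))
      .agg(
        max("df1.share").as("share"),
        max("df1.administrator").as("administrator"),
        max("df1.account").as("account"),
        max("df1.all_share").as("all_share"))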

My guess from the documentation is that Spark performs projection pushdown to resolve the parent dataframe, and that this error occurs when a dataframe is joined with itself. Can someone explain this?

Also, let me know whether it works for you.
