Spark ERROR executor: Exception in task 0.0 in stage 0.0 (tid 0) java.lang.ArithmeticException

I get the error below when I run my Java web application with Cassandra 3.11.9 and Spark 3.0.1.

My question is: why does this happen only after the application is deployed? It does not happen in the development environment.

2021-03-24 08:50:41.150 INFO 19613 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler: ShuffleMapStage 0 (collectAsList at FalhaService.java:60) failed in 7.513 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (GDBHML08 executor driver): java.lang.ArithmeticException: integer overflow
    at java.lang.Math.toIntExact(Math.java:1011)
    at org.apache.spark.sql.catalyst.util.DateTimeUtils$.fromJavaDate(DateTimeUtils.scala:90)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$DateConverter$.toCatalystImpl(CatalystTypeConverters.scala:306)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$DateConverter$.toCatalystImpl(CatalystTypeConverters.scala:305)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:107)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:252)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:242)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:107)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$.$anonfun$createToCatalystConverter$2(CatalystTypeConverters.scala:426)
    at com.datastax.spark.connector.datasource.UnsafeRowReader.read(UnsafeRowReaderFactory.scala:34)
    at com.datastax.spark.connector.datasource.UnsafeRowReader.read(UnsafeRowReaderFactory.scala:21)
    at com.datastax.spark.connector.datasource.CassandraPartitionReaderBase.$anonfun$getIterator$2(CassandraScanPartitionReaderFactory.scala:110)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:496)
    at com.datastax.spark.connector.datasource.CassandraPartitionReaderBase.next(CassandraScanPartitionReaderFactory.scala:66)
    at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:79)
    at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:112)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java)

Driver stacktrace: 2021-03-24 08:50:41.189 INFO 19613 --- [nio-8080-exec-2] org.apache.spark.scheduler.DAGScheduler: Job 0 failed: collectAsList at FalhaService.java:60, took 8.160348 s

The line of code referenced in this error:

List<Row> rows = dataset.collectAsList();

The code block:

Dataset<Row> dataset = session.sql(sql.toString());

// collectAsList() triggers the Spark job and pulls every result row back
// to the driver; this is the action where the overflow surfaces.
List<Row> rows = dataset.collectAsList();
ListIterator<Row> t = rows.listIterator();
while (t.hasNext()) {
    Row row = t.next();
    grafico = new EstGraficoRelEstTela();
    grafico.setSuperficie(row.getLong(0));
    grafico.setSubsea(row.getLong(1) + row.getLong(2));
    grafico.setNomeTipoSensor(row.getString(3));
    graficoLocalFalhas.add(grafico);
}
session.close();

Thanks,

It looks like you have incorrect data in your database: some date fields are in the far future. If you look at the source code, you can see that the date is first converted to milliseconds and then to days, and this conversion overflows the integer. That would explain why the code works in the development environment...
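
Here is a minimal, self-contained sketch of that failure mode. It simplifies what Spark's DateTimeUtils.fromJavaDate does (the real code also rebases calendars), but the narrowing step is the same: the day count is squeezed into an int with Math.toIntExact, which throws on overflow.

import java.sql.Date;
import java.util.concurrent.TimeUnit;

public class DateOverflowDemo {
    public static void main(String[] args) {
        // A normal date: ~18,710 days since the epoch fits easily in an int.
        Date ok = Date.valueOf("2021-03-24");
        System.out.println(Math.toIntExact(TimeUnit.MILLISECONDS.toDays(ok.getTime())));

        // A corrupted far-future value: the day count exceeds Integer.MAX_VALUE,
        // so Math.toIntExact throws java.lang.ArithmeticException: integer overflow.
        Date farFuture = new Date(Long.MAX_VALUE / 2);
        System.out.println(Math.toIntExact(TimeUnit.MILLISECONDS.toDays(farFuture.getTime())));
    }
}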

You can ask an administrator to check the data files for corrupted data, for example with the nodetool scrub command.
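
If you want to locate the offending rows yourself first, one option is to scan the table with the plain Cassandra Java driver, which reads timestamps as 64-bit Instant values and therefore cannot overflow. This is only a sketch: the keyspace, table, and column names (ks.falha, id, data_falha) are assumptions, and if the column is a CQL date rather than a timestamp, use row.getLocalDate(...) instead of row.getInstant(...).

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;
import java.time.Instant;

public class FindFarFutureDates {
    public static void main(String[] args) {
        // Anything after this cutoff is treated as implausible/corrupted.
        Instant cutoff = Instant.parse("2100-01-01T00:00:00Z");
        try (CqlSession session = CqlSession.builder().build()) {
            // Full-table scan; fine for a one-off diagnostic pass.
            for (Row row : session.execute("SELECT id, data_falha FROM ks.falha")) {
                Instant ts = row.getInstant("data_falha");
                if (ts != null && ts.isAfter(cutoff)) {
                    System.out.println("suspect row: id=" + row.getObject("id") + ", data_falha=" + ts);
                }
            }
        }
    }
}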

P.S. Are you sure you are using Spark 3.0.1? The positions of the functions in the error match Spark 3.1.1...
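
If in doubt, the deployed application can print the version it is actually running against; SparkSession exposes this as version:

// `session` is the SparkSession from the code above.
System.out.println("Spark version: " + session.version());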
