java.lang.NoClassDefFoundError: org/apache/spark/TaskOutputFileAlreadyExistException
I read data from HDFS and analyzed it, but when writing the result back I get this error. The full stack trace:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/TaskOutputFileAlreadyExistException
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:167)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:123)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:173)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:211)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:208)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:169)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:110)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:109)
    at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:828)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$4(SQLExecution.scala:100)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:828)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:309)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:236)
    at SparkSQL.SparkHDFS.main(SparkHDFS.java:22)

My code:

SparkSession sparkSession = SparkSession.builder().appName("FirstSQL").master("local").getOrCreate();

Encoder<MovieModal> movieModalEncoder = Encoders.bean(MovieModal.class);

Dataset<MovieModal> data = sparkSession.read().option("inferSchema",true)
                                        .option("header",true)
                                        .csv("hdfs://localhost:8020/data/ratings.csv")
                                        .as(movieModalEncoder);


Dataset<Row> groupData = data.groupBy(new Column("movieID")).count();

groupData.write().format("csv").save("hdfs://localhost:8020/var/groupData2.csv");

If the target directory already exists, you need to specify overwrite (replace the existing directory) or append (add to the existing directory) as the save mode when writing.

Try:

groupData.write().mode("overwrite").format("csv").save("hdfs://localhost:8020/var/groupData2.csv");
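Instead of the string "overwrite", you can also pass the type-safe SaveMode enum from org.apache.spark.sql. A minimal sketch (assuming groupData is the Dataset<Row> from the question):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

// SaveMode.Overwrite replaces the existing output directory;
// SaveMode.Append adds new files next to the existing ones;
// SaveMode.Ignore silently skips the write if the directory exists.
groupData.write()
         .mode(SaveMode.Overwrite)   // or SaveMode.Append
         .format("csv")
         .save("hdfs://localhost:8020/var/groupData2.csv");
```

Using the enum avoids typos in the mode string, which would otherwise only fail at runtime.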


Note: the technical posts on this site are licensed under CC BY-SA 4.0; if you repost, please credit this site or the original source. For any questions contact: yoyou2525@163.com.

 
粵ICP備18138465號  © 2020-2024 STACKOOM.COM