
Exception in thread "main" java.io.IOException: Job failed

I am new to the Hadoop environment and just tried a WordCount program, but every time I get an error like:

Application application_1623732814800_0004 failed 2 times due to AM Container for appattempt_1623732814800_0004_000002 exited with exitCode: 1 Failing this attempt.
Diagnostics: [2021-06-15 10:45:16.241]Exception from container-launch.
Container id: container_1623732814800_0004_02_000001
Exit code: 1
Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client.

Shell output: 1 file(s) moved.
"Setting up env variables"
"Setting up job resources"

[2021-06-15 10:45:16.244]Container exited with a non-zero exit code 1.
[2021-06-15 10:45:16.245]Container exited with a non-zero exit code 1.
For more detailed output, check the application tracking page: http://DESKTOP-22LEODT:8088/cluster/app/application_1623732814800_0004 Then click on links to logs of each attempt.
Failing the application.
2021-06-15 10:45:17,267 INFO mapreduce.Job: Counters: 0
Exception in thread "main" java.io.IOException: Job failed!

What I have done so far: I created a Scala class for the Hadoop word count program and built a jar file from it.

package piyush.jiwane.hadoop

import java.io.IOException
import java.util._
import scala.collection.JavaConversions._
import org.apache.hadoop.fs.Path
import org.apache.hadoop.conf._
import org.apache.hadoop.io._
import org.apache.hadoop.mapred._
import org.apache.hadoop.util._

object WordCount {
  // Mapper: tokenizes each input line and emits (word, 1) for every token.
  class Map extends MapReduceBase with Mapper[LongWritable, Text, Text, IntWritable] {
    private final val one = new IntWritable(1)
    private val word = new Text()

    @throws[IOException]
    def map(key: LongWritable, value: Text, output: OutputCollector[Text, IntWritable], reporter: Reporter) {
      val line: String = value.toString
      line.split(" ").foreach { token =>
        word.set(token)
        output.collect(word, one)
      }
    }
  }

  // Reducer (also used as the combiner): sums the per-word counts.
  class Reduce extends MapReduceBase with Reducer[Text, IntWritable, Text, IntWritable] {
    @throws[IOException]
    def reduce(key: Text, values: Iterator[IntWritable], output: OutputCollector[Text, IntWritable], reporter: Reporter) {
      val sum = values.toList.reduce((valueOne, valueTwo) => new IntWritable(valueOne.get() + valueTwo.get()))
      output.collect(key, new IntWritable(sum.get()))
    }
  }

  @throws[Exception]
  def main(args: Array[String]) {
    val conf: JobConf = new JobConf(this.getClass)
    conf.setJobName("WordCountScala")
    conf.setOutputKeyClass(classOf[Text])
    conf.setOutputValueClass(classOf[IntWritable])
    conf.setMapperClass(classOf[Map])
    conf.setCombinerClass(classOf[Reduce])
    conf.setReducerClass(classOf[Reduce])
    conf.setInputFormat(classOf[TextInputFormat])
    conf.setOutputFormat(classOf[TextOutputFormat[Text, IntWritable]])
    FileInputFormat.setInputPaths(conf, new Path(args(1)))
    FileOutputFormat.setOutputPath(conf, new Path(args(2)))
    JobClient.runJob(conf)
  }
}
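One thing to note: the code reads the input and output paths from args(1) and args(2), so the jar has to be launched with three program arguments (the first is unused). A hypothetical invocation, with the jar name and paths as illustrative placeholders rather than the original command:

hadoop jar wordcount.jar piyush.jiwane.hadoop.WordCount unused /user/piyush/input /user/piyush/output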

The input file for the Hadoop program:

11 23 45 17 23 45 88 15 24 26 85 96 44 52 10 15 55 84 58 62 78 98 84
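As a quick sanity check of what the job should produce for this input, the same count can be computed in plain Scala outside Hadoop. This is an illustrative sketch, not part of the submitted job:

// Local check of the word-count logic; mirrors map (emit token) + reduce (sum per token).
object WordCountLocal {
  def main(args: Array[String]): Unit = {
    val line = "11 23 45 17 23 45 88 15 24 26 85 96 44 52 10 15 55 84 58 62 78 98 84"
    val counts = line.split(" ").groupBy(identity).map { case (w, ts) => (w, ts.length) }
    counts.toSeq.sortBy(_._1).foreach { case (w, n) => println(s"$w\t$n") }
  }
}

Each distinct token should come out paired with its count, e.g. 15, 23, 45 and 84 occur twice.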

For reference, I followed the links below:

https://blog.knoldus.com/hadoop-word-count-program-in-scala/
https://www.logicplay.club/how-to-run-hadoop-wordcount-mapreduce-example-on-windows-10/
org.apache.hadoop.mapred.FileAlreadyExistsException

Please help me understand this. Thanks in advance.

EDIT: After running the program from cmd in administrator mode (which grants the symlink privilege the earlier CreateSymbolicLink error complained about), I got output like this:

2021-06-15 11:42:46,421 INFO mapreduce.Job:  map 0% reduce 0%
2021-06-15 11:43:01,282 INFO mapreduce.Job: Task Id : attempt_1623737359186_0001_m_000000_0, Status : FAILED Error: java.lang.ClassNotFoundException: scala.collection.mutable.ArrayOps$ofRef
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:20)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:13)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

2021-06-15 11:43:02,430 INFO mapreduce.Job:  map 50% reduce 0%
2021-06-15 11:43:13,872 INFO mapreduce.Job: Task Id : attempt_1623737359186_0001_m_000000_1, Status : FAILED Error: java.lang.ClassNotFoundException: scala.collection.mutable.ArrayOps$ofRef
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:20)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:13)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

2021-06-15 11:43:27,532 INFO mapreduce.Job:  map 50% reduce 17%
2021-06-15 11:43:27,562 INFO mapreduce.Job: Task Id : attempt_1623737359186_0001_m_000000_2, Status : FAILED Error: java.lang.ClassNotFoundException: scala.collection.mutable.ArrayOps$ofRef
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:20)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:13)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

2021-06-15 11:43:28,600 INFO mapreduce.Job:  map 100% reduce 0%
2021-06-15 11:43:29,628 INFO mapreduce.Job:  map 100% reduce 100%
2021-06-15 11:43:30,669 INFO mapreduce.Job: Job job_1623737359186_0001 failed with state FAILED due to: Task failed task_1623737359186_0001_m_000000 Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0

2021-06-15 11:43:30,877 INFO mapreduce.Job: Counters: 43
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=236028
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=130
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=3
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
                HDFS: Number of bytes read erasure-coded=0
        Job Counters
                Failed map tasks=4
                Killed reduce tasks=1
                Launched map tasks=5
                Launched reduce tasks=1
                Other local map tasks=3
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=64773
                Total time spent by all reduces in occupied slots (ms)=24081
                Total time spent by all map tasks (ms)=64773
                Total time spent by all reduce tasks (ms)=24081
                Total vcore-milliseconds taken by all map tasks=64773
                Total vcore-milliseconds taken by all reduce tasks=24081
                Total megabyte-milliseconds taken by all map tasks=66327552
                Total megabyte-milliseconds taken by all reduce tasks=24658944
        Map-Reduce Framework
                Map input records=0
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=96
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=19
                CPU time spent (ms)=873
                Physical memory (bytes) snapshot=250834944
                Virtual memory (bytes) snapshot=390701056
                Total committed heap usage (bytes)=240123904
                Peak Map Physical memory (bytes)=250834944
                Peak Map Virtual memory (bytes)=390774784
        File Input Format Counters
                Bytes Read=34

Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:876)
        at piyush.jiwane.hadoop.WordCount$.main(WordCount.scala:48)
        at piyush.jiwane.hadoop.WordCount.main(WordCount.scala)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

The most recent error, "java.lang.ClassNotFoundException: scala.collection.mutable.ArrayOps$ofRef", comes from using an incompatible version of Scala. You are using 2.13.x or newer; be sure to downgrade to 2.12.x or older.
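In practice that means the compiled classes and the scala-library on the task classpath must come from the same 2.x line. With sbt, the version can be pinned in build.sbt; a minimal sketch, assuming sbt with the sbt-assembly plugin (names and version numbers are illustrative):

// build.sbt -- minimal sketch; assumes the sbt-assembly plugin is added
// in project/plugins.sbt. Names and versions are illustrative.
ThisBuild / scalaVersion := "2.12.15"  // 2.12.x still has ArrayOps$ofRef

lazy val root = (project in file("."))
  .settings(
    name := "hadoop-wordcount",
    // "provided": the cluster supplies the Hadoop jars, while scala-library
    // stays in the assembled fat jar so YARN task containers can load it.
    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "3.2.2" % "provided"
  )

Building with sbt assembly then ships the matching Scala 2.12 runtime inside the job jar.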
