
Exception in thread "main" java.lang.ClassNotFoundException, mapreduce

I'm new to both Java and Hadoop, so thanks in advance for any help. I'm trying to do a join operation on two tables. ValueWrapper is a custom type that implements the Writable interface, and I put it in the stdRepartition package as well. I run everything from the command line; the commands and output are shown below:

Result:

javac StdRepartition.java ValueWrapper.java
jar -cvf StdRepartition.jar ./*.class
added manifest
adding: StdRepartition.class
adding: StdRepartition$DataMapper.class
adding: StdRepartition$StdReducer.class
adding: ValueWrapper.class

hadoop jar StdRepartition.jar stdRepartition.StdRepartition input output
Exception in thread "main" java.lang.ClassNotFoundException: stdRepartition.StdRepartition
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:205)

Code:

    package stdRepartition;

    import java.io.IOException;
    import java.util.ArrayList;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit; // new-API FileSplit; the old org.apache.hadoop.mapred.FileSplit does not match context.getInputSplit()
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    // import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class StdRepartition {

        public static class DataMapper extends Mapper<Object, Text, IntWritable, ValueWrapper> {
            private Text flag = new Text();
            private Text content = new Text();
            private ValueWrapper valueWrapper = new ValueWrapper();
            public void map(Object key, Text value, Context context) throws IOException, InterruptedException{
                FileSplit fileSplit = (FileSplit)context.getInputSplit();
                String filename = fileSplit.getPath().getName();
                int ID;
                if(filename.endsWith("data.txt")) {
                    String[] parts = value.toString().split("s+");
                    ID = Integer.parseInt(parts[0]);
                    flag = new Text("data");
                    content = value;
                }
                else {
                    String[] parts = value.toString().split("\\|");
                    ID = Integer.parseInt(parts[0]);
                    flag = new Text("user");
                    content = new Text(parts[2]);
                }
                valueWrapper.setFlag(flag);
                valueWrapper.setContent(content);
                context.write(new IntWritable(ID), valueWrapper);
            }
        }

        public static class StdReducer extends Reducer<IntWritable, ValueWrapper, NullWritable, Text> {
            private ArrayList<Text> ratings = new ArrayList<Text>();
            private Text age = new Text();
            public void reduce(IntWritable key, Iterable<ValueWrapper> value, Context context) throws IOException, InterruptedException {
                ratings.clear(); // the Reducer instance is reused across keys, so reset state for each key
                for(ValueWrapper val: value) {
                    Text flag = val.getFlag();
                    if(flag.toString().equals("user")) {
                        age = val.getContent();
                    }
                    else {
                        ratings.add(val.getContent());
                    }
                }

                String curAge = age.toString();
                for(Text r: ratings) {
                    String curR = r.toString();
                    curR = curR + "    " + curAge;
                    context.write(NullWritable.get(), new Text(curR));
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "StdRepartition");
            job.setJarByClass(StdRepartition.class);

            job.setMapperClass(DataMapper.class);
            job.setMapOutputKeyClass(IntWritable.class);
            job.setMapOutputValueClass(ValueWrapper.class);
            job.setReducerClass(StdReducer.class);
            job.setOutputKeyClass(NullWritable.class);
            job.setOutputValueClass(Text.class);
            // MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, DataMapper.class);
            // MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, DataMapper.class);

            // Set the input path to be a directory
            FileInputFormat.setInputPaths(job, args[0]);
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            System.exit(job.waitForCompletion(true)? 0:1);
        }
    }
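
ValueWrapper's source isn't included in the post. For completeness, here is a minimal sketch of what such a Writable wrapper could look like, assuming it only carries the flag and content Text fields used above (the field and method names are inferred from the calls in the mapper and reducer):

    package stdRepartition;

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    public class ValueWrapper implements Writable {
        private Text flag = new Text();    // "data" or "user", set by the mapper
        private Text content = new Text(); // the record payload

        public Text getFlag() { return flag; }
        public Text getContent() { return content; }
        public void setFlag(Text flag) { this.flag.set(flag); }
        public void setContent(Text content) { this.content.set(content); }

        // Writable requires symmetric serialization: write the fields in the
        // same order that readFields reads them back.
        @Override
        public void write(DataOutput out) throws IOException {
            flag.write(out);
            content.write(out);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            flag.readFields(in);
            content.readFields(in);
        }
    }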

I found the reason: the .class files need to be packaged from outside the package directory, so that the jar entries keep the stdRepartition/ path. Thanks for all the help. I also learned how to edit a post here.
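
Concretely, the fix is to compile with -d so javac creates the package directory, then jar that directory so the entries include the stdRepartition/ prefix (a sketch, assuming the Hadoop classpath for javac is already set up as in the original commands):

    # -d . places the compiled classes under ./stdRepartition/
    javac -d . StdRepartition.java ValueWrapper.java
    # the jar entries now carry the package path, e.g. stdRepartition/StdRepartition.class
    jar -cvf StdRepartition.jar stdRepartition/*.class
    hadoop jar StdRepartition.jar stdRepartition.StdRepartition input output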

To execute a MapReduce program, you have to perform the steps below:

  1. Create the jar file of your MapReduce program (you have already done this).
  2. Place the input file into HDFS / the file system.

And finally execute the command below:

hadoop jar [jar file name, fully qualified] [driver class name, fully qualified] /[input path] /[output path]
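
Applied to this question, the call would look something like this (assuming StdRepartition.jar is in the working directory and input exists in HDFS):

    hadoop jar StdRepartition.jar stdRepartition.StdRepartition input output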

Here is a very simple and basic step-by-step "Hello World" guide for MapReduce.

You must use the fully qualified class name, but without an extension like '.class' or '.java'.
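
For the class in this post, that means (the wrong variants below are hypothetical, shown only for contrast):

    stdRepartition.StdRepartition    # correct: fully qualified class name
    StdRepartition.class             # wrong: file name with an extension
    stdRepartition/StdRepartition    # wrong: a file path, not a class name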


