
Simple Program in Hadoop got ClassNotFoundException

Recently I rewrote the code in the WordCount example of Hadoop, but when I run it on my virtual machine (Ubuntu Server 14.04 with both Hadoop and Java set up), I get a ClassNotFoundException. I have already tried many solutions found on the Internet, but they didn't work. Is there anything I can do to fix this? [error screenshot]

And my code is:

    package org.apache.hadoop.examples;

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;

    public class myhadoop {

        // Note: a static counter like this is only visible inside a single JVM;
        // in a distributed run, each map task increments its own copy.
        public static int total_number = 0;

        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            // Emit (word, 1) for every token in the input line.
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                    total_number = total_number + 1;
                }
            }
        }

        public static class IntSumCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {

            private IntWritable result = new IntWritable();

            // Partial sum of counts per word, run map-side as a combiner.
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static class ResultCountReducer extends Reducer<Text, IntWritable, Text, FloatWritable> {

            private FloatWritable result = new FloatWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                // Cast to float first: plain int/int division would always yield 0.
                float frequency = (float) sum / total_number;
                result.set(frequency);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
            if (otherArgs.length != 2) {
                System.err.println("Usage: myhadoop <in> <out>");
                System.exit(2);
            }
            Job job = new Job(conf, "myhadoop");
            job.setJarByClass(myhadoop.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumCombiner.class);
            job.setReducerClass(ResultCountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(FloatWritable.class);
            FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
            FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Solution from the comments: delete the first line of the file, i.e. the package declaration

    package org.apache.hadoop.examples;

and in the driver replace

    job.setJarByClass(myhadoop.class);

with

    job.setJar()
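
For reference, a minimal sketch of the driver after those two changes. It assumes TokenizerMapper, IntSumCombiner, and ResultCountReducer from the question live in the same (now package-less) source file, and the jar name myhadoop.jar is an assumption; point setJar() at whatever jar you actually build:

    // No package declaration at the top of the file.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class myhadoop {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "myhadoop");
            // Point the job at the jar file directly instead of resolving it
            // from the class, so the package/classpath mismatch no longer
            // matters. "myhadoop.jar" is an assumed name.
            job.setJar("myhadoop.jar");
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumCombiner.class);
            job.setReducerClass(ResultCountReducer.class);
            // The map output value type (IntWritable) differs from the final
            // output type (FloatWritable), so declare it explicitly.
            job.setMapOutputValueClass(IntWritable.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(FloatWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }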
