
Simple Program in Hadoop got ClassNotFoundException

Recently I rewrote the code from Hadoop's WordCount example, but when I ran it on my virtual machine (an Ubuntu Server 14.04 with both Hadoop and Java set up), I got a ClassNotFoundException. I have found many solutions on the Internet, but none of them worked. What can I do to solve this problem? (error screenshot)

My code is:

        package org.apache.hadoop.examples;

        import java.io.IOException;
        import java.util.StringTokenizer;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.FloatWritable;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
        import org.apache.hadoop.util.GenericOptionsParser;

        public class myhadoop {

            // Caution: a static counter is only visible within one JVM; map and
            // reduce tasks run in separate JVMs, so the reducer will not see the
            // value accumulated by the mappers.
            public static int total_number = 0;

            public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

                private final static IntWritable one = new IntWritable(1);
                private Text word = new Text();

                @Override
                public void map(Object key, Text value, Context context)
                        throws IOException, InterruptedException {
                    // Emit (word, 1) for every token of the input line.
                    StringTokenizer itr = new StringTokenizer(value.toString());
                    while (itr.hasMoreTokens()) {
                        word.set(itr.nextToken());
                        context.write(word, one);
                        total_number = total_number + 1;
                    }
                }
            }

            public static class IntSumCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {

                private IntWritable result = new IntWritable();

                @Override
                public void reduce(Text key, Iterable<IntWritable> values, Context context)
                        throws IOException, InterruptedException {
                    // Pre-aggregate per-word counts on the map side.
                    int sum = 0;
                    for (IntWritable val : values) {
                        sum += val.get();
                    }
                    result.set(sum);
                    context.write(key, result);
                }
            }

            public static class ResultCountReducer extends Reducer<Text, IntWritable, Text, FloatWritable> {

                private FloatWritable result = new FloatWritable();

                @Override
                public void reduce(Text key, Iterable<IntWritable> values, Context context)
                        throws IOException, InterruptedException {
                    int sum = 0;
                    for (IntWritable val : values) {
                        sum += val.get();
                    }
                    // Cast before dividing so integer division does not truncate
                    // every frequency below 1 down to 0.
                    float frequency = (float) sum / total_number;
                    result.set(frequency);
                    context.write(key, result);
                }
            }

            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
                if (otherArgs.length != 2) {
                    System.err.println("Usage: myhadoop <in> <out>");
                    System.exit(2);
                }
                Job job = new Job(conf, "myhadoop");
                job.setJarByClass(myhadoop.class);
                job.setMapperClass(TokenizerMapper.class);
                job.setCombinerClass(IntSumCombiner.class);
                job.setReducerClass(ResultCountReducer.class);
                job.setOutputKeyClass(Text.class);
                job.setOutputValueClass(FloatWritable.class);
                // The map/combine output value type (IntWritable) differs from the
                // job's final output value type (FloatWritable), so it must be
                // declared explicitly.
                job.setMapOutputKeyClass(Text.class);
                job.setMapOutputValueClass(IntWritable.class);
                FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
                FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
                System.exit(job.waitForCompletion(true) ? 0 : 1);
            }
        }

Solution (from the comments): delete the first line, i.e. the package declaration

'package org.apache.hadoop.examples;'

and change the code by replacing

Job.setJarByClass()

with

Job.setJar()
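
A minimal sketch of the resulting driver change, assuming the class is rebuilt without the package declaration and packaged into a jar; the jar file name below is illustrative, not from the original post:

        Configuration conf = new Configuration();
        Job job = new Job(conf, "myhadoop");
        // Hand Hadoop an explicit jar path instead of resolving the jar
        // from the classpath via setJarByClass(myhadoop.class).
        job.setJar("myhadoop.jar"); // illustrative name; use your actual jar path

setJarByClass() locates the job jar by searching the classpath for the given class, so a package declaration that does not match the jar's directory layout can leave the task JVMs unable to load the class, producing the ClassNotFoundException; setJar() sidesteps that lookup by naming the jar directly.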
