Control is not going to the reducer in hadoop

I have written a custom InputFormat and data type in Hadoop that reads images and stores them in RGB arrays. But when I use them in my map and reduce functions, control never reaches the reducer.

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class Image {

    public static class Map extends Mapper<Text, ImageM, Text, ImageM> {

        public void map(Text key, ImageM value, Context context)
                throws IOException, InterruptedException {
          /*
           for(int i=0;i<value.Height;i++)
           {
               System.out.println();
               for(int j=0;j<value.Width;j++)
               {
                   System.out.print(" "+value.Blue[i][j]);
               }
           }       
           */
           context.write(key, value);


        } 
    }

    public static class Reduce extends Reducer<Text, ImageM, Text, IntWritable> {

        public void reduce(Text key, ImageM value, Context context) 
         throws IOException, InterruptedException {

           for(int i=0;i<value.Height;i++)
           {
               System.out.println();
               for(int j=0;j<value.Width;j++)
               {
                   System.out.print(value.Blue[i][j]+" ");
               }
           }
           IntWritable m = new IntWritable(10);
           context.write(key, m);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(ImageM.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(ImageFileInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        long start = new Date().getTime();    
        job.waitForCompletion(true);
        long end = new Date().getTime();
        System.out.println("Job took "+(end-start) + " milliseconds");
    }

}

Here the key in the map function is the file name, as produced by the input format.

I get the output as "icon2.gif ImageM@31093d14".

Everything works fine if my data type is used only in the mapper. Can you guess where the problem is?

Your reduce function signature is wrong. It should be:

@Override
public void reduce(Text key, Iterable<ImageM> values, Context context) 
     throws IOException, InterruptedException

Please use the @Override annotation to let the compiler spot this error for you.
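To see why the job silently skips your code rather than failing: `Reducer.reduce` takes an `Iterable<VALUEIN>`, so a method that takes a single value *overloads* the base method instead of overriding it, and the framework keeps dispatching to the default (identity) implementation. Here is a minimal plain-Java sketch of that mechanism, with no Hadoop dependency; the class names `Base`, `Wrong`, and `Right` are illustrative, not part of the Hadoop API:

```java
import java.util.Arrays;

// Base stands in for org.apache.hadoop.mapreduce.Reducer:
// its reduce() takes an Iterable of values.
class Base<K, V> {
    String lastCalled = "";

    public void reduce(K key, Iterable<V> values) {
        lastCalled = "default"; // stands in for the framework's identity reduce
    }
}

class Wrong extends Base<String, Integer> {
    // Single-value parameter: this is a NEW OVERLOAD, not an override.
    // Adding @Override here would be a compile error, exposing the bug.
    public void reduce(String key, Integer value) {
        lastCalled = "wrong";
    }
}

class Right extends Base<String, Integer> {
    @Override // compiles, because this genuinely overrides Base.reduce
    public void reduce(String key, Iterable<Integer> values) {
        lastCalled = "overridden";
    }
}

public class OverrideDemo {
    public static void main(String[] args) {
        Base<String, Integer> w = new Wrong();
        Base<String, Integer> r = new Right();
        // The framework only ever calls through the base signature:
        w.reduce("k", Arrays.asList(1, 2));
        r.reduce("k", Arrays.asList(1, 2));
        System.out.println(w.lastCalled); // prints "default"
        System.out.println(r.lastCalled); // prints "overridden"
    }
}
```

This is exactly what happens to the `Reduce` class above: its single-value `reduce` is never called, the inherited identity reducer writes the `ImageM` value straight through, and `TextOutputFormat` prints it via `toString()`, giving output like "icon2.gif ImageM@31093d14".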
