
Map-Reduce job failing to deliver expected partitioned files

In a Map-Reduce job, I am using five different input files, where my dataset contains values under two categories, P and I. When I values are found, I pass them into an I-part-r-00000 file, and likewise for P. I am using the MultipleOutputs class in the reducer to achieve this.

My Mapper class contains:

public class parserMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {

        String IPFLAG;
        String theData = value.toString();
        // Split the record on commas to inspect the first element
        String[] element_data = theData.split(",");

        if (element_data[0].trim().equalsIgnoreCase("005010X222A1")) {
            IPFLAG = "P";
        } else {
            IPFLAG = "I";
        }

        // Compare strings with equals(), not ==
        if (IPFLAG.equals("P")) {
            context.write(new Text(IPFLAG), new Text(theData));
        } else if (IPFLAG.equals("I")) {
            context.write(new Text(IPFLAG), new Text(theData));
        } else {
            System.out.println("No category found");
        }
    }

    @Override
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);
        while (context.nextKeyValue()) {
            map(context.getCurrentKey(), context.getCurrentValue(), context);
        }
        cleanup(context);
    }
}
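(A side note on comparing the flag strings: because the flag is only ever assigned from string literals, `==` happens to work here through string interning, but `equals()` is the safe idiom, since strings built at runtime are distinct objects. A minimal standalone illustration:)

```java
public class StringCompareDemo {
    public static void main(String[] args) {
        String literal = "P";             // string literals are interned
        String built = new String("P");   // runtime object, not interned

        System.out.println(literal == "P");     // true: same interned object
        System.out.println(built == "P");       // false: different objects
        System.out.println(built.equals("P"));  // true: compares contents
    }
}
```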

My Reducer class includes:

public class parserReducer extends Reducer<Text, Text, Text, Text> {

    private MultipleOutputs multipleOutputs;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        multipleOutputs = new MultipleOutputs(context);
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        multipleOutputs.close();
    }

    @Override
    public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {

        Object c = null;

        try{
            if (!(key.toString().isEmpty())) {

                for (Text value : values) {

                    multipleOutputs.write(c, value, key.toString());
                }

            }
        }
        catch (Exception e) {
            System.out.println("Caught Exception: " + e.getMessage());
        }
    }


}

And the Driver code includes:

public class parserDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("textinputformat.record.delimiter", "~" + "\n" + "ISA*");
        Job job = new Job(conf);
        job.setJobName("PARSER");
        job.setJarByClass(parserDriver.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(parserMapper.class);
        job.setReducerClass(parserReducer.class);
//      job.setOutputFormatClass(TextOutputFormat.class);
        LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
//      job.setOutputFormatClass(LazyOutputFormat.class);

/*      MultipleOutputs.addNamedOutput(job, "P", TextOutputFormat.class, Text.class, Text.class);
        MultipleOutputs.addNamedOutput(job, "I", TextOutputFormat.class, Text.class, Text.class);
*/
        // Pass as option -D mapred.reduce.tasks=<number>
        job.setNumReduceTasks(3);

        /* This line is to accept the input recursively */
        // FileInputFormat.setInputDirRecursive(job, true);

        Path outputFilePath = new Path("/Users/Mohit/output");
        FileInputFormat.addInputPath(job, new Path("/Users/Mohit/input"));
        FileOutputFormat.setOutputPath(job, outputFilePath);

        /*
         * Delete the output path if it already exists
         */
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(outputFilePath)) {
            fs.delete(outputFilePath, true);
        }

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
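As an aside, the custom record delimiter configured in the driver concatenates to the literal string `"~\nISA*"`, so TextInputFormat treats everything up to that boundary as one record and strips the delimiter itself. A plain-Java sketch of that splitting behavior, using made-up sample data:

```java
import java.util.regex.Pattern;

public class DelimiterDemo {
    public static void main(String[] args) {
        // Same concatenation used in conf.set("textinputformat.record.delimiter", ...)
        String delimiter = "~" + "\n" + "ISA*";

        // Made-up sample input: three records separated by the delimiter
        String data = "ISA*A,1~\nISA*B,2~\nISA*C,3";

        // Each piece corresponds to one record the mapper would see
        String[] records = data.split(Pattern.quote(delimiter));
        for (String record : records) {
            System.out.println(record);
        }
    }
}
```

Note that because the delimiter itself contains `ISA*` and Hadoop consumes the delimiter, every record after the first reaches the mapper without its leading `ISA*` marker.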

Through all this, I am trying to achieve two partitions per input file:

file1 -> P-part-r-00000, I-part-r-00001

file2 -> P-part-r-00002, I-part-r-00003

but instead I am getting only two partitions across all the files being fed as input to this job:

file1, file2, file3, file4, file5 -> P-part-r-00000, I-part-r-00001

Not sure what I am missing here; can anybody help, please?

1) In your Driver, add these lines to set up the named outputs used for file naming:

   job.setOutputFormatClass(TextOutputFormat.class);
   MultipleOutputs.addNamedOutput(job, "I", TextOutputFormat.class,
          Text.class, Text.class);
   MultipleOutputs.addNamedOutput(job, "P", TextOutputFormat.class,
          Text.class, Text.class);

2) Change your reducer to send each value to the file with the matching name:

@Override
public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
    try{
        if (!(key.toString().isEmpty())) {

            for (Text value : values) {

                multipleOutputs.write(key.toString(), key, value);
            }

        }
    }
    catch (Exception e) {
        System.out.println("Caught Exception: " + e.getMessage());
    }
}

3) Change the number of reducers to 2 to get exactly 2 files.
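To see why 2 reducers line up neatly with the two keys, recall that Hadoop's default HashPartitioner assigns a key to `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`, where `Text.hashCode()` is a byte-wise hash (not `String.hashCode()`). A plain-Java sketch that mirrors both (it reimplements them rather than calling Hadoop, so treat it as an approximation of the real classes):

```java
public class PartitionDemo {
    // Mirrors org.apache.hadoop.io.Text's hashCode (WritableComparator.hashBytes)
    static int textHash(String s) {
        byte[] bytes = s.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        int hash = 1;
        for (byte b : bytes) {
            hash = (31 * hash) + (int) b;
        }
        return hash;
    }

    // Mirrors HashPartitioner.getPartition
    static int partition(String key, int numReduceTasks) {
        return (textHash(key) & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        for (int n : new int[] {2, 3}) {
            System.out.println(n + " reducers: P -> partition " + partition("P", n)
                    + ", I -> partition " + partition("I", n));
        }
    }
}
```

With 2 reducers, "P" and "I" land in different partitions, so each reducer handles exactly one category; with 3 reducers, one partition receives no keys at all, which is why LazyOutputFormat was needed to suppress the empty part file.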
