
Combiner creating map output file per region in HBase scan mapreduce

Hi, I am running an application which reads records from HBase and writes them into text files.

I have used a combiner in my application and a custom partitioner as well. I have used 41 reducers because I need to create 40 reducer output files that satisfy my condition in the custom partitioner.

Everything works fine, but when I use the combiner in my application it creates a map output file per region, i.e. per mapper.

For example, I have 40 regions in my application, so 40 mappers get initiated and 40 map-output files are created. But the reducer is not able to combine all the map outputs and generate the final reducer output files (which should be 40 reducer output files).

The data in the files is correct, but the number of files has increased.

Any idea how I can get only the reducer output files?

// Reducer Class
    job.setCombinerClass(CommonReducer.class);
    job.setReducerClass(CommonReducer.class); // reducer class

Below are my job details:

Submitted:  Mon Apr 10 09:42:55 CDT 2017
Started:    Mon Apr 10 09:43:03 CDT 2017
Finished:   Mon Apr 10 10:11:20 CDT 2017
Elapsed:    28mins, 17sec
Diagnostics:    
Average Map Time    6mins, 13sec
Average Shuffle Time    17mins, 56sec
Average Merge Time  0sec
Average Reduce Time     0sec 

Here is my combiner logic:

import java.io.IOException;
import org.apache.log4j.Logger;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class CommonCombiner extends Reducer<NullWritable, Text, NullWritable, Text> {

    private Logger logger = Logger.getLogger(CommonCombiner.class);
    private MultipleOutputs<NullWritable, Text> multipleOutputs;
    String strName = "";
    private static final String DATA_SEPERATOR = "\\|\\!\\|";

    public void setup(Context context) {
        logger.info("Inside Combiner.");
        multipleOutputs = new MultipleOutputs<NullWritable, Text>(context);
    }

    @Override
    public void reduce(NullWritable Key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {

        for (Text value : values) {
            final String valueStr = value.toString();
            StringBuilder sb = new StringBuilder();
            if ("".equals(strName) && strName.length() == 0) {
                String[] strArrFileName = valueStr.split(DATA_SEPERATOR);
                String strFullFileName[] = strArrFileName[1].split("\\|\\^\\|");

                strName = strFullFileName[strFullFileName.length - 1];


                String strArrvalueStr[] = valueStr.split(DATA_SEPERATOR);
                if (!strArrvalueStr[0].contains(HbaseBulkLoadMapperConstants.FF_ACTION)) {
                    sb.append(strArrvalueStr[0] + "|!|");
                }
                // This write runs inside the combiner, so it executes once per
                // mapper (i.e. once per region) and its output never reaches the shuffle.
                multipleOutputs.write(NullWritable.get(), new Text(sb.toString()), strName);
                context.getCounter(Counters.FILE_DATA_COUNTER).increment(1);


            }

        }
    }


    public void cleanup(Context context) throws IOException, InterruptedException {
        multipleOutputs.close();
    }
}

I have replaced multipleOutputs.write(NullWritable.get(), new Text(sb.toString()), strName); with

context.write()

and I got the correct output.
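This fix makes sense once you remember that a combiner's output is the reducer's input: anything written through MultipleOutputs inside the combiner is a side effect that lands on disk once per mapper (one per region here) and never enters the shuffle, whereas records forwarded with context.write() are merged by the shuffle and reach the reducer. A minimal plain-Java sketch of the two patterns (no Hadoop; all names here are hypothetical, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

public class CombinerSketch {
    // Simulates both combiner patterns and returns
    // { side-effect files written, records reaching the reducer }.
    static int[] run() {
        // Each inner list stands in for one mapper's output (one mapper per region).
        List<List<String>> mapperOutputs = List.of(
                List.of("a", "b"), List.of("c"), List.of("d", "e"));

        List<String> sideFiles = new ArrayList<>(); // simulated MultipleOutputs files
        List<String> shuffle = new ArrayList<>();   // records headed into the shuffle

        for (List<String> oneMapper : mapperOutputs) {
            // Wrong pattern: writing a file from the combiner produces
            // one file per mapper and bypasses the shuffle entirely.
            sideFiles.add(String.join(",", oneMapper));

            // Right pattern: forwarding records (like context.write) lets the
            // shuffle merge everything, so the reducer sees every record.
            shuffle.addAll(oneMapper);
        }
        return new int[] { sideFiles.size(), shuffle.size() };
    }

    public static void main(String[] args) {
        int[] counts = run();
        System.out.println("side files (one per mapper): " + counts[0]);
        System.out.println("records reaching the reducer: " + counts[1]);
    }
}
```

With three simulated mappers the side-effect path produces three files while all five records still reach the reducer only through the shuffle path, which is why moving the MultipleOutputs call into the real reducer (and using context.write() in the combiner) leaves you with reducer output files only.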
