MapReduce Apache Hadoop Technology

I am trying to write a word count program using the MapReduce Hadoop technology. What I need to do is develop an Indexed Word Count application that counts the number of occurrences of each word in each file of a given input file set. This file set is present in an Amazon S3 bucket. It also counts the total occurrences of each word. I have attached the code that counts the occurrences of the words in the given file set. After this I need to print which word occurs in which file, along with the number of occurrences of that word in that particular file.

I know it's a bit complex, but any help would be appreciated.

Map.java

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    // Only keep lowercase words made of letters and digits, starting with a letter.
    private String pattern = "^[a-z][a-z0-9]*$";

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        // Name of the file this input split comes from (not yet used in the output key).
        InputSplit inputSplit = context.getInputSplit();
        String fileName = ((FileSplit) inputSplit).getPath().getName();
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            String stringWord = word.toString().toLowerCase();
            if (stringWord.matches(pattern)) {
                context.write(new Text(stringWord), one);
            }
        }
    }
}

Reduce.java

import java.io.IOException;

import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;

public class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
    throws IOException, InterruptedException {
        // Sum up all the counts emitted for this word.
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

WordCount.java

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = Job.getInstance(conf, "WordCount"); // Job.getInstance replaces the deprecated new Job(conf, name)
        job.setJarByClass(WordCount.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setNumReduceTasks(3);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}

In the mapper, create a custom writable text pair that serves as the output key, holding the filename and the word from your file, with 1 as the value.

Mapper Output:

<K,V> ==> <MytextpairWritable, new IntWritable(1)>
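
A minimal sketch of such a pair writable could look like the following (the class name MytextpairWritable and its fields are just illustrative, not from the original post); Hadoop map output keys must implement WritableComparable:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;

// Hypothetical composite key holding (filename, word).
public class MytextpairWritable implements WritableComparable<MytextpairWritable> {
    private Text filename = new Text();
    private Text word = new Text();

    public MytextpairWritable() {}                      // required no-arg constructor

    public MytextpairWritable(String filename, String word) {
        this.filename.set(filename);
        this.word.set(word);
    }

    @Override
    public void write(DataOutput out) throws IOException {     // serialization
        filename.write(out);
        word.write(out);
    }

    @Override
    public void readFields(DataInput in) throws IOException {  // deserialization
        filename.readFields(in);
        word.readFields(in);
    }

    @Override
    public int compareTo(MytextpairWritable other) {    // sort by filename, then word
        int cmp = filename.compareTo(other.filename);
        return (cmp != 0) ? cmp : word.compareTo(other.word);
    }

    @Override
    public int hashCode() {                             // used by the default partitioner
        return filename.hashCode() * 163 + word.hashCode();
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof MytextpairWritable)) return false;
        MytextpairWritable other = (MytextpairWritable) o;
        return filename.equals(other.filename) && word.equals(other.word);
    }

    @Override
    public String toString() {                          // controls how TextOutputFormat prints the key
        return filename + "," + word;
    }
}

With TextOutputFormat, each output line is the key's toString() followed by a tab and the count, so the toString above determines how the filename/word pair appears in the result files.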

You can get the filename in the mapper with the snippet below.

FileSplit fileSplit = (FileSplit)context.getInputSplit();
String filename = fileSplit.getPath().getName();

And pass these to the constructor of the custom writable class in the context.write. Something like this.

context.write(new MytextpairWritable(filename,word),new IntWritable(1));
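
Putting the pieces together, the mapper from the question could be adapted roughly like this (a sketch that reuses the tokenizing and pattern logic from Map.java above and assumes the MytextpairWritable sketch):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class Map extends Mapper<LongWritable, Text, MytextpairWritable, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private String pattern = "^[a-z][a-z0-9]*$";

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Get the name of the file this split belongs to.
        FileSplit fileSplit = (FileSplit) context.getInputSplit();
        String filename = fileSplit.getPath().getName();

        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreTokens()) {
            String stringWord = tokenizer.nextToken().toLowerCase();
            if (stringWord.matches(pattern)) {
                // Emit (filename, word) as a composite key with a count of 1.
                context.write(new MytextpairWritable(filename, stringWord), one);
            }
        }
    }
}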

And on the reducer side just sum up the values, so that for each file you get how many occurrences there are of a particular word. The reducer code would be something like this.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class Reduce extends Reducer<MytextpairWritable, IntWritable, MytextpairWritable, IntWritable> {

    public void reduce(MytextpairWritable key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the counts for this (filename, word) pair.
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
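
Note that the driver would also need to register the new key type; in WordCount.java that would be something along these lines (assuming the class names used above):

        job.setMapOutputKeyClass(MytextpairWritable.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(MytextpairWritable.class);
        job.setOutputValueClass(IntWritable.class);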

Your output will be something like this.

File1,hello,2
File2,hello,3
File3,hello,1
