
Hadoop MapReduce Query

I was trying to use Hadoop MapReduce to calculate the sum of the weights of all incoming edges for each node in a graph. The input is in .tsv format and looks like:

src tgt weight

X 102 1

X 200 1

X 123 5

Y 245 1

Y 101 1

Z 99 2

X 145 3

Y 24 1

A 21 5

. . .

. . .

The expected output is:

src SUM(weight)

X 10

Y 3

Z 2

A 5

. .

. .

I used the WordCount example code from Hadoop ( http://www.cloudera.com/content/cloudera/en/documentation/hadoop-tutorial/CDH5/Hadoop-Tutorial/ht_wordcount1_source.html?scroll=topic_5_1 ) as a reference. I tried adapting the code, but all my efforts were in vain.

I am pretty new to Java and Hadoop. I have shared my code below. Kindly help me figure out what is wrong with it.

Thanks.

Code:

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class Task1 {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable value_parsed = new IntWritable();
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            Text keys = new Text();
            int sum;
            while (tokenizer.hasMoreTokens()) {
                tokenizer.nextToken();
                keys.set(tokenizer.nextToken());
                sum = Integer.parseInt(tokenizer.nextToken());
                output.collect(keys, new IntWritable(sum));
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(Task1.class);
        conf.setJobName("Task1");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

You have to change your code a little bit. With the current loop, a line like X 102 1 emits the key 102 (the second column) instead of X (the first column), so the sums end up grouped by the wrong field.

    while (tokenizer.hasMoreTokens()) {
        tokenizer.nextToken();                          // discards the first column (src)
        keys.set(tokenizer.nextToken());                // wrong: this makes the second column (tgt) the key,
                                                        // but the key should be the first column (src)
        sum = Integer.parseInt(tokenizer.nextToken());  // third column (weight)
        output.collect(keys, new IntWritable(sum));
    }
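
A minimal corrected loop (a sketch, assuming every line has exactly the three columns src, tgt, weight in that order) would be:

    while (tokenizer.hasMoreTokens()) {
        keys.set(tokenizer.nextToken());                // first column (src) becomes the key
        tokenizer.nextToken();                          // skip the second column (tgt)
        sum = Integer.parseInt(tokenizer.nextToken());  // third column (weight) is the value
        output.collect(keys, new IntWritable(sum));
    }

Since the reducer only sums the values, keeping Reduce as the combiner remains correct with this change.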

Hope this helps.
