Run Java MapReduce using the Hadoop Streaming API

I have developed my own mapper.java and reducer.java and want to run them as a Hadoop job. I have configured a single-node Hadoop cluster and ran the MapReduce job like this:

$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar -file mapper.class \
-mapper 'java mapper' -file reducer.class -reducer 'java reducer' \
-input /home/hdpuser/gutenberg/* -output /home/hdpuser/gutenberg.out

packageJobJar: [mapper.class, reducer.class, /usr/hadoop/tmp/hadoop-unjar1486800984159594392/] [] /tmp/streamjob6918733297327109918.jar tmpDir=null
14/03/05 10:52:20 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/05 10:52:20 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/05 10:52:20 INFO mapred.FileInputFormat: Total input paths to process : 6
14/03/05 10:52:20 INFO streaming.StreamJob: getLocalDirs(): [/usr/hadoop/tmp/mapred/local]
14/03/05 10:52:20 INFO streaming.StreamJob: Running job: job_201403041518_0020
14/03/05 10:52:20 INFO streaming.StreamJob: To kill this job, run:
14/03/05 10:52:20 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0020
14/03/05 10:52:20 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0020
14/03/05 10:52:21 INFO streaming.StreamJob:  map 0%  reduce 0%
14/03/05 10:52:49 INFO streaming.StreamJob:  map 100%  reduce 100%
14/03/05 10:52:49 INFO streaming.StreamJob: To kill this job, run:
14/03/05 10:52:49 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0020
14/03/05 10:52:49 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0020
14/03/05 10:52:49 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201403041518_0020_m_000001
14/03/05 10:52:49 INFO streaming.StreamJob: killJob...
Streaming Command Failed!

Can you explain how to run my own MapReduce files with streaming?

Here is mapper.java (no Hadoop libs, just plain Java):

import java.util.Scanner;
import java.io.*;
public class mapper {

    public static void main(String[] args)
    {
        Scanner sc = new Scanner(System.in);
        String line = null;
        while(sc.hasNextLine())
        {
            line = sc.nextLine();
            // emit "<word> 1" for every space-separated token
            String[] words = line.split(" ");
            for(int j=0;j<words.length;j++)
            {
                System.out.println(words[j]+" 1");
            }
            }
        }
    }
}

Here is reducer.java:

import java.util.Scanner;
import java.io.*;

public class reducer {

    public static void main(String[] args)
    {
        Scanner sc = new Scanner(System.in);
        String line = null;
        String current_word = null;
        String word = null;
        int current_count = 0;
        // streaming sorts map output by key, so identical words arrive adjacent
        while(sc.hasNextLine())
        {
            line = sc.nextLine();
            String[] words = line.split(" ");
            word = words[0];
            int count = Integer.parseInt(words[1]);
            if (current_word != null && current_word.equals(word))
            {
                current_count += count;
            }
            else
            {
                if (current_word != null)
                {
                    System.out.println(current_word+" "+String.valueOf(current_count));
                }
                current_count = count;
                current_word = word;
            }
        }
        // flush the final word after the input ends
        if(current_word != null)
        {
            System.out.println(current_word+" "+String.valueOf(current_count));
        }
    }
}

Here is how I tested my code:

$ echo "foo foo quux labs foo bar quux" | java mapper | sort -k1,1 | java reducer
bar 1
foo 3
labs 1
quux 2

Anything else I should share?

Here is how I tested my Python code:

$ echo "foo foo quux labs foo bar quux" | python mapper.py | sort -k1,1 | python reducer.py 
bar 1
foo 3
labs    1
quux    2

Here is how I executed the Hadoop job with the Python scripts, and it works:

$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar -file mapper.py \
-mapper 'python mapper.py' -file reducer.py -reducer 'python reducer.py' \
-input /home/hdpuser/gutenberg/* -output /home/hdpuser/gutenberg.out1

packageJobJar: [mapper.py, reducer.py, /usr/hadoop/tmp/hadoop-unjar272415560722407865/] [] /tmp/streamjob3055337726170986279.jar tmpDir=null
14/03/05 11:07:35 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/05 11:07:35 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/05 11:07:35 INFO mapred.FileInputFormat: Total input paths to process : 6
14/03/05 11:07:36 INFO streaming.StreamJob: getLocalDirs(): [/usr/hadoop/tmp/mapred/local]
14/03/05 11:07:36 INFO streaming.StreamJob: Running job: job_201403041518_0021
14/03/05 11:07:36 INFO streaming.StreamJob: To kill this job, run:
14/03/05 11:07:36 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0021
14/03/05 11:07:36 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0021
14/03/05 11:07:37 INFO streaming.StreamJob:  map 0%  reduce 0%
14/03/05 11:07:43 INFO streaming.StreamJob:  map 33%  reduce 0%
14/03/05 11:07:49 INFO streaming.StreamJob:  map 67%  reduce 0%
14/03/05 11:07:53 INFO streaming.StreamJob:  map 100%  reduce 22%
14/03/05 11:08:03 INFO streaming.StreamJob:  map 100%  reduce 100%
14/03/05 11:08:05 INFO streaming.StreamJob: Job complete: job_201403041518_0021
14/03/05 11:08:05 INFO streaming.StreamJob: Output: /home/hdpuser/gutenberg.out1

I have verified the results.

I have also tried creating jar files and running them:

$ jar cvfe reducer.jar reducer reducer.class 
added manifest
adding: reducer.class(in = 1268) (out= 726)(deflated 42%)
$ jar cvfe mapper.jar mapper mapper.class 
added manifest
adding: mapper.class(in = 970) (out= 577)(deflated 40%)

$ echo "foo foo quux labs foo bar quux" | java -jar mapper.jar | sort -k1,1 | java -jar reducer.jar
bar 1
foo 3
labs 1
quux 2

Then I used the jars with Hadoop, but it didn't work:

$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar -file mapper.jar \
-mapper 'java -jar mapper.jar' -file reducer.jar -reducer 'java -jar reducer.jar' \
-input /home/hdpuser/gutenberg/* -output /home/hdpuser/gutenberg.out3

packageJobJar: [mapper.jar, reducer.jar, /usr/hadoop/tmp/hadoop-unjar1923907702869068962/] [] /tmp/streamjob7767637153401518705.jar tmpDir=null
14/03/05 12:41:52 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/05 12:41:52 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/05 12:41:52 INFO mapred.FileInputFormat: Total input paths to process : 6
14/03/05 12:41:52 INFO streaming.StreamJob: getLocalDirs(): [/usr/hadoop/tmp/mapred/local]
14/03/05 12:41:52 INFO streaming.StreamJob: Running job: job_201403041518_0023
14/03/05 12:41:52 INFO streaming.StreamJob: To kill this job, run:
14/03/05 12:41:52 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0023
14/03/05 12:41:52 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0023
14/03/05 12:41:53 INFO streaming.StreamJob:  map 0%  reduce 0%
14/03/05 12:42:19 INFO streaming.StreamJob:  map 100%  reduce 100%
14/03/05 12:42:19 INFO streaming.StreamJob: To kill this job, run:
14/03/05 12:42:19 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0023
14/03/05 12:42:19 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0023
14/03/05 12:42:19 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201403041518_0023_m_000000
14/03/05 12:42:19 INFO streaming.StreamJob: killJob...
Streaming Command Failed!

Here is the error log:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

It's difficult to provide an exact solution without knowing how your files are organized.
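
Before abandoning streaming, note that "subprocess failed with code 1" usually just means the child java process exited with an error, and a common cause is that it cannot find or load the class. -file mapper.class ships the file into each task's working directory, but that directory is not necessarily on the child JVM's classpath, so one variant worth trying (an educated guess, not a verified fix) is to add the current directory to the classpath explicitly:

$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar -file mapper.class \
-mapper 'java -cp . mapper' -file reducer.class -reducer 'java -cp . reducer' \
-input /home/hdpuser/gutenberg/* -output /home/hdpuser/gutenberg.out2

If it still fails, open the failed task attempt from the Tracking URL and read its stderr; it shows the actual error the java command printed.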

Alternatively, you could skip streaming altogether and drive the job from the main method of a Java class, using the regular MapReduce API:

// imports needed (classic org.apache.hadoop.mapred API):
// import org.apache.hadoop.fs.Path;
// import org.apache.hadoop.io.DoubleWritable;
// import org.apache.hadoop.io.Text;
// import org.apache.hadoop.mapred.*;

public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(ClassWithMain.class);
    conf.setJobName("name");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(DoubleWritable.class); //or IntWritable, ObjectWritable, or whatever your reducer emits

    conf.setMapperClass(mapper.class);
    conf.setCombinerClass(reducer.class); //if your combiner is just a local reducer
    conf.setReducerClass(reducer.class);

    conf.setInputFormat(TextInputFormat.class); //assuming you are feeding it text
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));

    Path out1 = new Path(args[1]);
    FileOutputFormat.setOutputPath(conf, out1);

    JobClient.runJob(conf); // blocking call
}
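
Note that with this driver, mapper and reducer can no longer be stdin-reading classes with a main method; they have to implement the Mapper and Reducer interfaces of the classic org.apache.hadoop.mapred API. A minimal word-count sketch of what they might look like (my assumption of the rewrite, each class in its own file; with this pair the driver would set conf.setOutputValueClass(IntWritable.class)):

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

// mapper.java -- emits (word, 1) for every token, replacing the stdin loop
public class mapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            output.collect(word, ONE);
        }
    }
}

// reducer.java -- Hadoop groups the values by key, so no manual
// current_word bookkeeping is needed
public class reducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}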

Then you can run the whole thing with a shell script like this (remove out2 if you only run one job/pass):

#!/usr/bin/env bash

# Export environment variable
export HADOOP_HOME=/yourPathHere

# Remove old build artifacts
rm -f ClassWithMain.jar
rm -rf MyProject_classes
mkdir -p MyProject_classes

# Compile the job (check your Hadoop version and library paths)
javac -classpath $HADOOP_HOME/share/hadoop/common/hadoop-common-2.2.0.jar:$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:$HADOOP_HOME/share/hadoop/common/lib/commons-cli-1.2.jar -d MyProject_classes ClassWithMain.java

# Abort if compilation failed (read $? before the jar command overwrites it)
exitValue=$?
if [ $exitValue != 0 ]; then
    exit $exitValue
fi

# Package the compiled classes
jar -cvf ClassWithMain.jar -C MyProject_classes/ .

# File names
out1=out1-`date +%Y%m%d%H%M%S`
out2=out2-`date +%Y%m%d%H%M%S`

# Create an empty file
hadoop fs -touchz ./$out2

# Submit the 1st job
hadoop jar ClassWithMain.jar org.myorg.ClassWithMain /data ./$out1 ./$out1/merged ./$out2

# Display the results
hadoop fs -cat ./$out1/merged
hadoop fs -cat ./$out2

# Cleanup
hadoop fs -rm -r ./out*

If you need to do more operations after the job has run, you can add these lines to the main method:

// the output is a set of files; merge them before continuing
// (needs org.apache.hadoop.conf.Configuration, org.apache.hadoop.fs.FileSystem,
// org.apache.hadoop.fs.FileUtil and java.io.IOException)
Path out1Merged = new Path(args[2]);
Configuration config = new Configuration();
try {
    FileSystem hdfs = FileSystem.get(config);
    FileUtil.copyMerge(hdfs, out1, hdfs, out1Merged, false, config, null);
} catch (IOException e) {
    e.printStackTrace();
}
