Run Java MapReduce using the Hadoop Streaming API

I have written my own mapper.java and reducer.java and want to run them as a Hadoop job. I have set up a single-node Hadoop cluster and run the MapReduce job like this:

$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar -file mapper.class \
-mapper 'java mapper' -file reducer.class -reducer 'java reducer' \
-input /home/hdpuser/gutenberg/* -output /home/hdpuser/gutenberg.out

packageJobJar: [mapper.class, reducer.class, /usr/hadoop/tmp/hadoop-unjar1486800984159594392/] [] /tmp/streamjob6918733297327109918.jar tmpDir=null
14/03/05 10:52:20 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/05 10:52:20 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/05 10:52:20 INFO mapred.FileInputFormat: Total input paths to process : 6
14/03/05 10:52:20 INFO streaming.StreamJob: getLocalDirs(): [/usr/hadoop/tmp/mapred/local]
14/03/05 10:52:20 INFO streaming.StreamJob: Running job: job_201403041518_0020
14/03/05 10:52:20 INFO streaming.StreamJob: To kill this job, run:
14/03/05 10:52:20 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0020
14/03/05 10:52:20 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0020
14/03/05 10:52:21 INFO streaming.StreamJob:  map 0%  reduce 0%
14/03/05 10:52:49 INFO streaming.StreamJob:  map 100%  reduce 100%
14/03/05 10:52:49 INFO streaming.StreamJob: To kill this job, run:
14/03/05 10:52:49 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0020
14/03/05 10:52:49 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0020
14/03/05 10:52:49 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201403041518_0020_m_000001
14/03/05 10:52:49 INFO streaming.StreamJob: killJob...
Streaming Command Failed!

Can you explain how to use my own map and reduce files?

Here is mapper.java (no Hadoop libraries, just plain Java):

import java.util.Scanner;

// Streaming mapper: reads lines from stdin and emits "<word> 1" per word on stdout.
public class mapper {

    public static void main(String[] args)
    {
        Scanner sc = new Scanner(System.in);
        while (sc.hasNextLine())
        {
            String line = sc.nextLine();
            String[] words = line.split(" ");
            for (int j = 0; j < words.length; j++)
            {
                System.out.println(words[j] + " 1");
            }
        }
    }
}

Here is reducer.java:

import java.util.Scanner;

// Streaming reducer: expects "<word> <count>" lines sorted by word on stdin,
// sums the counts per word, and prints "<word> <total>" on stdout.
public class reducer {

    public static void main(String[] args)
    {
        Scanner sc = new Scanner(System.in);
        String current_word = null;
        int current_count = 0;
        while (sc.hasNextLine())
        {
            String line = sc.nextLine();
            String[] words = line.split(" ");
            String word = words[0];
            int count = Integer.parseInt(words[1]);
            if (current_word != null && current_word.equals(word))
            {
                current_count += count;
            }
            else
            {
                if (current_word != null)
                {
                    // A new word begins: emit the finished one.
                    System.out.println(current_word + " " + current_count);
                }
                current_count = count;
                current_word = word;
            }
        }
        // Flush the last word, if any input was read.
        if (current_word != null)
        {
            System.out.println(current_word + " " + current_count);
        }
    }
}

This is how I tested the code locally (the sort step stands in for Hadoop's shuffle/sort phase):

$ echo "foo foo quux labs foo bar quux" | java mapper | sort -k1,1 | java reducer
bar 1
foo 3
labs 1
quux 2

What else should I share?

This is how I tested my Python code:

$ echo "foo foo quux labs foo bar quux" | python mapper.py | sort -k1,1 | python reducer.py 
bar 1
foo 3
labs    1
quux    2

And here is how I ran the Hadoop job with the Python scripts; this one works:

$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar -file mapper.py \
-mapper 'python mapper.py' -file reducer.py -reducer 'python reducer.py' \
-input /home/hdpuser/gutenberg/* -output /home/hdpuser/gutenberg.out1

packageJobJar: [mapper.py, reducer.py, /usr/hadoop/tmp/hadoop-unjar272415560722407865/] [] /tmp/streamjob3055337726170986279.jar tmpDir=null
14/03/05 11:07:35 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/05 11:07:35 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/05 11:07:35 INFO mapred.FileInputFormat: Total input paths to process : 6
14/03/05 11:07:36 INFO streaming.StreamJob: getLocalDirs(): [/usr/hadoop/tmp/mapred/local]
14/03/05 11:07:36 INFO streaming.StreamJob: Running job: job_201403041518_0021
14/03/05 11:07:36 INFO streaming.StreamJob: To kill this job, run:
14/03/05 11:07:36 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0021
14/03/05 11:07:36 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0021
14/03/05 11:07:37 INFO streaming.StreamJob:  map 0%  reduce 0%
14/03/05 11:07:43 INFO streaming.StreamJob:  map 33%  reduce 0%
14/03/05 11:07:49 INFO streaming.StreamJob:  map 67%  reduce 0%
14/03/05 11:07:53 INFO streaming.StreamJob:  map 100%  reduce 22%
14/03/05 11:08:03 INFO streaming.StreamJob:  map 100%  reduce 100%
14/03/05 11:08:05 INFO streaming.StreamJob: Job complete: job_201403041518_0021
14/03/05 11:08:05 INFO streaming.StreamJob: Output: /home/hdpuser/gutenberg.out1

I have verified the results.

I have also tried creating jar files and running the jars:

$ jar cvfe reducer.jar reducer reducer.class 
added manifest
adding: reducer.class(in = 1268) (out= 726)(deflated 42%)
$ jar cvfe mapper.jar mapper mapper.class 
added manifest
adding: mapper.class(in = 970) (out= 577)(deflated 40%)

$ echo "foo foo quux labs foo bar quux" | java -jar mapper.jar | sort -k1,1 | java -jar reducer.jar
bar 1
foo 3
labs 1
quux 2

Then I ran Hadoop with the jars, but it did not work:

$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar -file mapper.jar \
-mapper 'java -jar mapper.jar' -file reducer.jar -reducer 'java -jar reducer.jar' \
-input /home/hdpuser/gutenberg/* -output /home/hdpuser/gutenberg.out3

packageJobJar: [mapper.jar, reducer.jar, /usr/hadoop/tmp/hadoop-unjar1923907702869068962/] [] /tmp/streamjob7767637153401518705.jar tmpDir=null
14/03/05 12:41:52 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/05 12:41:52 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/05 12:41:52 INFO mapred.FileInputFormat: Total input paths to process : 6
14/03/05 12:41:52 INFO streaming.StreamJob: getLocalDirs(): [/usr/hadoop/tmp/mapred/local]
14/03/05 12:41:52 INFO streaming.StreamJob: Running job: job_201403041518_0023
14/03/05 12:41:52 INFO streaming.StreamJob: To kill this job, run:
14/03/05 12:41:52 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0023
14/03/05 12:41:52 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0023
14/03/05 12:41:53 INFO streaming.StreamJob:  map 0%  reduce 0%
14/03/05 12:42:19 INFO streaming.StreamJob:  map 100%  reduce 100%
14/03/05 12:42:19 INFO streaming.StreamJob: To kill this job, run:
14/03/05 12:42:19 INFO streaming.StreamJob: /usr/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201403041518_0023
14/03/05 12:42:19 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403041518_0023
14/03/05 12:42:19 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201403041518_0023_m_000000
14/03/05 12:42:19 INFO streaming.StreamJob: killJob...
Streaming Command Failed!

Here is the error log:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

Without knowing how your files are organized, it is difficult to provide an exact solution.

You can drive the job from the main method of a Java class, like this:

// Imports needed for the old (org.apache.hadoop.mapred) API:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(calcAll.class); // calcAll is this driver's class; use your own
    conf.setJobName("name");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(DoubleWritable.class); // or ObjectWritable, or whatever your reducer emits

    conf.setMapperClass(mapper.class);
    conf.setCombinerClass(reducer.class); // if your combiner is just a local reducer
    conf.setReducerClass(reducer.class);

    conf.setInputFormat(TextInputFormat.class); // assuming you are feeding it text
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));

    Path out1 = new Path(args[1]);
    FileOutputFormat.setOutputPath(conf, out1);

    JobClient.runJob(conf); // blocking call
}
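Note that setMapperClass and setReducerClass expect classes written against Hadoop's Mapper and Reducer interfaces, not plain main-method classes like the mapper and reducer above. A minimal word-count sketch against the old org.apache.hadoop.mapred API could look like this (the class names are made up for illustration; since it emits IntWritable counts, the driver's setOutputValueClass would become IntWritable.class):

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

// Mapper: TextInputFormat supplies <byte offset, line>; emit <word, 1> per token.
// (Each public class goes in its own .java file.)
public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        for (String token : value.toString().split(" ")) {
            word.set(token);
            output.collect(word, ONE);
        }
    }
}

// Reducer: the framework groups and sorts values by key before calling reduce,
// so the manual current_word bookkeeping from the streaming reducer disappears.
public class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}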

Then you can run it with a shell script like this (remove out2 if you only do one job/pass):

#!/usr/bin/env bash

# Export environment variable
export HADOOP_HOME=/yourPathHere

# Remove old build artifacts
rm -f ClassWithMain.jar
rm -rf MyProject_classes
mkdir -p MyProject_classes  # javac -d expects the output directory to exist

# Compile the driver (check your Hadoop version and library paths)
javac -classpath $HADOOP_HOME/share/hadoop/common/hadoop-common-2.2.0.jar:$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:$HADOOP_HOME/share/hadoop/common/lib/commons-cli-1.2.jar -d MyProject_classes ClassWithMain.java

# Abort if compilation failed
exitValue=$?
if [ $exitValue != 0 ]; then
    exit $exitValue
fi

# Package the compiled classes
jar -cvf ClassWithMain.jar -C MyProject_classes/ .

# File names
out1=out1-`date +%Y%m%d%H%M%S`
out2=out2-`date +%Y%m%d%H%M%S`

# Create an empty file
hadoop fs -touchz ./$out2

# Submit the 1st job
hadoop jar ClassWithMain.jar org.myorg.ClassWithMain /data ./$out1 ./$out1/merged ./$out2

# Display the results
hadoop fs -cat ./$out1/merged
hadoop fs -cat ./$out2

# Cleanup
hadoop fs -rm -r ./out*

If you need to do more after the job has run, you can add these lines to the main method:

// The job output is a set of part files; merge them before continuing.
// (Needs org.apache.hadoop.conf.Configuration, org.apache.hadoop.fs.FileSystem,
// org.apache.hadoop.fs.FileUtil and java.io.IOException among the driver's imports.)
Path out1Merged = new Path(args[2]);
Configuration config = new Configuration();
try {
    FileSystem hdfs = FileSystem.get(config);
    FileUtil.copyMerge(hdfs, out1, hdfs, out1Merged, false, config, null);
} catch (IOException e) {
    e.printStackTrace();
}
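Here out1Merged is args[2], which the shell script above passes as ./$out1/merged, and the false argument tells copyMerge to leave the original part files in place rather than deleting them after the merge.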
