Hadoop WordCount for multiple words not getting the public variables
I have a simple Hadoop program that I need to implement for a paper at my university. It is an alternative WordCount problem: it should build combined Text() strings of n words each, and the reducer should only sum up those strings that occur >= k times. I capture the n and k integers from the command line, after the input and output folders (args[2] and args[3]). The problem is that n and k are empty when used in the mapper and reducer, even though their values are read correctly from the command line. The code is below; what is wrong?
public class MultiWordCount {

    public static int n;
    public static int k;

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
        private StringBuilder phrase = new StringBuilder();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                for (int i = 0; i < n; i++) {
                    if (itr.hasMoreTokens()) {
                        phrase.append(itr.nextToken());
                        phrase.append(" ");
                    }
                }
                word.set(phrase.toString());
                context.write(word, one);
                phrase.setLength(0);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            if (sum >= k) {
                result.set(sum);
                context.write(key, result);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        n = Integer.parseInt(args[2]);
        k = Integer.parseInt(args[3]);
        Job job = Job.getInstance(conf, "multi word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Although your Java-based logic looks fine here, the Map and Reduce functions in Hadoop are more short-sighted, or independent, than one might think. To be more precise, you declare public static variables in the parent class and initialize them in the driver/main function, but the mapper/reducer instances do not have any access to the driver; they only see their strict scopes within the TokenizerMapper and IntSumReducer classes. This is why n and k look empty when you access them from the mapper and reducer.
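To see the mechanism concretely, here is a minimal illustrative sketch (not the fix): when the job runs on a cluster rather than in local mode, each map or reduce task runs in its own JVM, so a static field assigned in the driver JVM still holds its default value inside the task JVM.

public static int n;                     // set to 3 in the driver JVM only

protected void setup(Context context) {
    // Hypothetical check: inside a task JVM this prints "n = 0", the
    // default int value, because the driver's assignment never reached it.
    System.out.println("n = " + n);
}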
Since your program only has a single job and executes within a single Hadoop Configuration, you don't need Hadoop Counters here. You can instead declare Configuration-based values that every mapper and reducer will read before its Map or Reduce function executes, via the setup function of the TokenizerMapper and IntSumReducer classes.
To declare these kinds of values and pass them to the MapReduce functions, you can do the following in the driver/main method:
conf.set("n", args[2]);
and then access this value (while converting it from String to int) inside the setup methods of TokenizerMapper and IntSumReducer:
n = Integer.parseInt(context.getConfiguration().get("n"));
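As a side note (an optional variation, not required for the fix), Configuration also offers typed getters, so the get-and-parse step can be collapsed into one call with an explicit fallback:

n = context.getConfiguration().getInt("n", 1);  // falls back to 1 if "n" was never set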
So the program can look like this:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MultiWordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
        private StringBuilder phrase = new StringBuilder();
        private int n;

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // read the n value that the driver stored in the job Configuration
            n = Integer.parseInt(context.getConfiguration().get("n"));
        }

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                for (int i = 0; i < n; i++) {
                    if (itr.hasMoreTokens()) {
                        phrase.append(itr.nextToken());
                        phrase.append(" ");
                    }
                }
                word.set(phrase.toString());
                context.write(word, one);
                phrase.setLength(0);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();
        private int k;

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // read the k value that the driver stored in the job Configuration
            k = Integer.parseInt(context.getConfiguration().get("k"));
        }

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            if (sum >= k) {
                result.set(sum);
                context.write(key, result);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("n", args[2]);
        conf.set("k", args[3]);

        // delete the output directory if it already exists, so the job can be re-run
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(new Path(args[1]))) {
            fs.delete(new Path(args[1]), true);
        }

        Job job = Job.getInstance(conf, "Multi Word Count");
        job.setJarByClass(MultiWordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
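As an optional refinement before looking at the output (a sketch under the assumption that you are free to change the argument handling; the class name MultiWordCountDriver is made up), the driver could implement Hadoop's Tool interface and be launched through ToolRunner, so that n and k can be passed as standard -D generic options instead of positional arguments:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical launch (jar name assumed):
//   hadoop jar multiwordcount.jar MultiWordCountDriver -D n=3 -D k=1 <input> <output>
public class MultiWordCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // ToolRunner has already merged the -D n/-D k options into this
        // Configuration, so the setup() methods above read them exactly as before.
        Job job = Job.getInstance(getConf(), "Multi Word Count");
        job.setJarByClass(MultiWordCountDriver.class);
        job.setMapperClass(MultiWordCount.TokenizerMapper.class);
        job.setCombinerClass(MultiWordCount.IntSumReducer.class);
        job.setReducerClass(MultiWordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MultiWordCountDriver(), args));
    }
}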
For n=3 and k=1, the output looks like this (using a text file with sentences by Franz Kafka, as shown here):