How to Output Mapper Key as Text to the Reducer in Hadoop
I am trying to convert a String into a Text object and emit it as the map output key to the Reducer. In the reducer, however, the key shows up as a blank space, and the value printed looks like an object reference (a storage location) rather than the map output value. If I hard-code a String into the Text object and send that out instead, I do see the correct key in the Reducer.

Edit: this is now very strange and interesting. The same program works fine on a smaller dataset, i.e. the Text key sent from the mapper arrives in the reducer. But with the original, larger dataset, the Text key sent from map() does not show up in the reducer. There must be something about Hadoop that I don't understand yet. Here is the complete source code:
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class InvertedIndex {

    public InvertedIndex() {}

    public static class InvertedMap extends Mapper<IntWritable, Text, Text, DocAttributes> {
        Integer docNum;
        String word;
        Integer docFrequency;
        Integer termFrequency;
        private Text key = new Text();

        @Override
        public void map(IntWritable mapIn, Text mapValIn, Context context)
                throws IOException, InterruptedException {
            try {
                String line = mapValIn.toString();
                String[] words = line.trim().split("\\s");
                List<String> wordList = new ArrayList<String>(Arrays.asList(words));
                String k;
                for (int i = 0; i < wordList.size(); i++) {
                    k = wordList.get(i);
                    //Text key = new Text(k);
                    //DocAttributes da = new DocAttributes();
                    key.set(k);
                    int sum = 1;
                    for (int j = i; j < wordList.size(); j++) {
                        if (wordList.get(i).matches(wordList.get(j)) && j > i) {
                            sum++;
                            docNum = mapIn.get();
                            docFrequency = sum;
                            word = wordList.get(i);
                            termFrequency = sum;
                            wordList.remove(j);
                        } else {
                            docNum = mapIn.get();
                            docFrequency = sum;
                            word = wordList.get(i);
                            termFrequency = sum;
                        }
                    }
                    if (i == words.length - 1) {
                        docNum = mapIn.get();
                        docFrequency = sum;
                        word = wordList.get(i);
                        termFrequency = sum;
                    }
                    context.write(key, new DocAttributes(docNum, word, docFrequency, termFrequency));
                }
            } catch (NullPointerException ne) {
                ne.printStackTrace();
            }
        }
    }

    public static class InvertedReduce extends Reducer<Text, DocAttributes, LongWritable, DocAttributes> {
        @Override
        public void reduce(Text key, Iterable<DocAttributes> value, Context context)
                throws IOException, InterruptedException {
            Iterator<DocAttributes> iterator = value.iterator();
            DocAttributes doc = new DocAttributes();
            List<DocAttributes> list = new ArrayList<DocAttributes>();
            while (iterator.hasNext()) {
                // copy each value; Hadoop reuses the object behind the iterator
                list.add(new DocAttributes(iterator.next()));
            }
            Integer docFrequency = 0;
            for (DocAttributes d : list) {
                docFrequency += d.getDocFrequency();
                doc.setDocNum(d.getDocNum());
                doc.setWord(d.getWord());
                doc.setDocFrequency(docFrequency);
                doc.setTermFrequency(d.getTermFrequency());
            }
            context.write(new LongWritable(), doc);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setMapperClass(InvertedMap.class);
        //job.setCombinerClass(InvertedCombine.class);
        job.setReducerClass(InvertedReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DocAttributes.class);
        job.setJarByClass(InvertedIndex.class);
        TextInputFormat.addInputPath(job, new Path(args[0]));
        TextOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setInputFormatClass(DocInput.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.waitForCompletion(true);
    }
}
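As an aside: the nested loop in map() is essentially counting how many times each token occurs in the line. That counting step can be done in one pass with a map, without removing elements from the list being iterated. A minimal sketch in plain Java (no Hadoop types; the method name `termFrequencies` is mine, not from the original code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TermCounts {
    // Count occurrences of each token in a line in a single pass,
    // avoiding index-based removal from a list while looping over it.
    static Map<String, Integer> termFrequencies(String line) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String w : line.trim().split("\\s+")) {
            counts.merge(w, 1, Integer::sum); // add 1, or start at 1 if absent
        }
        return counts;
    }

    public static void main(String[] args) {
        // Each distinct word maps to its count, e.g. "the" -> 2
        System.out.println(termFrequencies("the cat sat on the mat"));
    }
}
```

In the mapper this would give one entry per distinct word, so context.write() could emit exactly one record per word without the duplicate-removal loop.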
You are modifying the ArrayList 'wordList' inside the loop that is iterating over it. Could you try using an Iterator-based loop and removing elements during iteration with Iterator.remove() instead?
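The pitfall this points at can be reproduced outside Hadoop. In the sketch below (plain Java, hypothetical list contents), removing by index during a forward loop skips the element that shifts into the freed slot, while Iterator.remove() deletes safely:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RemoveDuringIteration {
    public static void main(String[] args) {
        // Removing by index shifts later elements left, so the element that
        // slides into position j is never examined on this pass.
        List<String> byIndex = new ArrayList<>(Arrays.asList("a", "b", "b", "b", "c"));
        for (int j = 0; j < byIndex.size(); j++) {
            if (byIndex.get(j).equals("b")) {
                byIndex.remove(j); // skips the following "b" unless j is decremented
            }
        }
        System.out.println(byIndex); // a "b" survives: [a, b, c]

        // Iterator.remove() deletes the current element without skipping,
        // and without throwing ConcurrentModificationException.
        List<String> byIterator = new ArrayList<>(Arrays.asList("a", "b", "b", "b", "c"));
        Iterator<String> it = byIterator.iterator();
        while (it.hasNext()) {
            if (it.next().equals("b")) {
                it.remove();
            }
        }
        System.out.println(byIterator); // [a, c]
    }
}
```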