Apache Crunch unable to write output
This is probably an oversight, but I cannot figure out why Apache Crunch will not write output to a file for the very simple program I am writing in order to learn Crunch.
Here is the code:
import org.apache.crunch.Pipeline;
import org.apache.hadoop.conf.Configuration;
....
private Pipeline pipeline;
private Configuration etlConf;
....
this.etlConf = getConf();
this.pipeline = new MRPipeline(TestETL.class, etlConf);
....
// Read file
logger.info("Reading input file: " + inputFileURI.toString());
PCollection<String> input = pipeline.readTextFile(inputFileURI.toString());
System.out.println("INPUT SIZE = " + input.asCollection().getValue().size());
// Write file
logger.info("Writing Final output to file: " + outputFileURI.toString());
input.write(
        To.textFile(outputFileURI.toString()),
        WriteMode.OVERWRITE);
Here is the logging I see when I execute this jar with hadoop:
18/12/31 09:41:51 INFO etl.TestClass: Executing Test run
18/12/31 09:41:51 INFO etl.TestETL: Reading input file: /user/sw029693/process_analyzer/input/input.txt
INPUT SIZE = 3
18/12/31 09:41:51 INFO etl.TestETL: Writing Final output to file:
/user/sw029693/process_analyzer/output/occurences
18/12/31 09:41:51 INFO impl.FileTargetImpl: Will write output files to new path: /user/sw029693/process_analyzer/output/occurences
18/12/31 09:41:51 INFO etl.TestETL: Cleaning-up TestETL run
18/12/31 09:41:51 INFO etl.TestETL: ETL completed with status 0.
The input file is very simple and looks like this:
this is line 1
this is line 2
this is line 3
Although the logging indicates that a write to the output location should have happened, I do not see any files being created. Any ideas?
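A likely cause (my reading of the Crunch API, not something confirmed in the question itself): Crunch pipelines are lazily evaluated. `PCollection.write(...)` only registers a target with the planner; no MapReduce job actually runs until `Pipeline.run()` or `Pipeline.done()` is called, and the snippet above never calls either. A sketch of the missing step, reusing the same `pipeline`, `input`, and `outputFileURI` names from the question (this fragment assumes the surrounding class and the Crunch/Hadoop jars, so it is illustrative rather than standalone):

```java
// Sketch only: write() merely registers the output target with the planner.
input.write(
        To.textFile(outputFileURI.toString()),
        Target.WriteMode.OVERWRITE);

// This triggers planning and execution of the underlying MapReduce job(s);
// without it (or pipeline.run()), the registered write never materializes.
PipelineResult result = pipeline.done();
```

Note that the answer code below does call `pipeline.done()` before returning, which is consistent with this explanation.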
package com.hadoop.crunch;

import java.io.*;
import java.util.Collection;
import java.util.Iterator;

import org.apache.crunch.*;
import org.apache.crunch.impl.mr.MRPipeline;
import org.apache.crunch.io.From;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.util.*;
import org.apache.log4j.Logger;

public class App extends Configured implements Tool, Serializable {

    private static final long serialVersionUID = 1L;
    private static Logger LOG = Logger.getLogger(App.class);

    @Override
    public int run(String[] args) throws Exception {
        final Path fileSource = new Path(args[0]);
        final Path outFileName = new Path(args[1], "event-" + System.currentTimeMillis() + ".txt");

        // MRPipeline translates the overall pipeline into one or more MapReduce jobs
        Pipeline pipeline = new MRPipeline(App.class, getConf());

        // Specify the input data to the pipeline.
        // The input data is contained in a PCollection
        PCollection<String> inDataPipe = pipeline.read(From.textFile(fileSource));

        // Inject an operation into the Crunch data pipeline
        PObject<Collection<String>> dataCollection = inDataPipe.asCollection();

        // Iterate over the materialized collection and write each element manually
        Iterator<String> iterator = dataCollection.getValue().iterator();
        FileSystem fs = FileSystem.getLocal(getConf());
        BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(fs.create(outFileName, true)));
        while (iterator.hasNext()) {
            String data = iterator.next();
            bufferedWriter.write(data);
            bufferedWriter.newLine();
        }
        bufferedWriter.close();

        // Start the execution of the Crunch pipeline, triggering the creation & execution of MR jobs
        PipelineResult result = pipeline.done();
        return result.succeeded() ? 0 : 1;
    }

    public static void main(String[] args) {
        if (args.length != 2) {
            throw new RuntimeException("Usage: hadoop jar <jarFile> <inputPath> <outputPath>");
        }
        try {
            ToolRunner.run(new Configuration(), new App(), args);
        } catch (Exception e) {
            LOG.error(e.getLocalizedMessage());
        }
    }
}
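The manual write step in the middle of `run(...)` — iterating the materialized collection and writing it out line by line — can be sketched with plain `java.io`, without the Hadoop `FileSystem`. The class name, file name, and sample data below are my own stand-ins, not part of the answer:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ManualWriteSketch {

    // Mirrors the answer's loop: each element of the collection goes on its own line.
    static void writeLines(Iterable<String> data, String outFile) throws IOException {
        BufferedWriter w = new BufferedWriter(new FileWriter(outFile));
        try {
            Iterator<String> it = data.iterator();
            while (it.hasNext()) {
                w.write(it.next());
                w.newLine();
            }
        } finally {
            w.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for dataCollection.getValue(): the three input lines from the question.
        List<String> lines = Arrays.asList("this is line 1", "this is line 2", "this is line 3");
        String out = "event-sketch.txt"; // hypothetical local file name

        writeLines(lines, out);

        // Read the file back to show the round trip.
        BufferedReader r = new BufferedReader(new FileReader(out));
        List<String> readBack = new ArrayList<String>();
        String line;
        while ((line = r.readLine()) != null) {
            readBack.add(line);
        }
        r.close();
        System.out.println(readBack.size()); // prints 3
    }
}
```

The try/finally around the writer matters in the Hadoop version too: if an exception escapes before `close()`, the output stream may never flush, which is another way to end up with a missing or empty output file.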
Usage: run as a Java program with two arguments: the first arg is the input file name or directory, and the second arg is the output directory. The output file name is event-<timestamp>. Remember there is a single space between args{0} and args{1}: /user/sw029693/process_analyzer/input/input.txt /user/sw029693/process_analyzer/input/