
Why is Hadoop unable to find this file in local mode even though it exists?

I am getting this error when running Hadoop in local mode through Maven.

15/03/24 12:45:24 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/03/24 12:45:24 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/03/24 12:45:24 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/03/24 12:45:24 INFO mapred.MapTask: soft limit at 83886080
15/03/24 12:45:24 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/03/24 12:45:24 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/03/24 12:45:25 INFO mapreduce.Job: Job job_local977124269_0001 running in uber mode : false
15/03/24 12:45:25 INFO mapreduce.Job:  map 0% reduce 0%
15/03/24 12:45:26 INFO mapred.MapTask: Starting flush of map output
15/03/24 12:45:26 INFO mapred.LocalJobRunner: Map task executor complete.
15/03/24 12:45:26 WARN mapred.LocalJobRunner: job_local977124269_0001
java.lang.Exception: java.io.FileNotFoundException: file:<home_dir>/src/test/data/doc/<file> (No such file or directory)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
Caused by: java.io.FileNotFoundException: file:<home_dir>/src/test/data/doc/<file> (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:146)
    at package.DocumentRecordReader.parseDocument(DocumentRecordReader.java:186)
    at package.DocumentRecordReader.nextKeyValue(DocumentRecordReader.java:130)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
15/03/24 12:45:26 INFO mapreduce.Job: Job job_local977124269_0001 failed with state FAILED due to: NA
15/03/24 12:45:26 INFO mapreduce.Job: Counters: 0
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Command execution failed.

As you can see, the Maven verify goal fails because Hadoop cannot find the file in the src/test/data/doc directory. I have checked manually that the path is correct and that the file really does exist.
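
One detail that stands out in the trace is that the path handed to FileInputStream still carries the file: scheme. java.io.FileInputStream takes a literal filesystem path, not a URI, so a string such as file:/home/... is resolved as a relative path whose first component is literally "file:". A minimal, self-contained check showing the difference (the concrete path below is hypothetical):

import java.io.File;

public class PathCheck {
    public static void main(String[] args) {
        // Hypothetical stand-in for <home_dir>/src/test/data/doc/<file>
        String plain = "/home/user/project/src/test/data/doc/sample.xml";
        String scheme = "file:" + plain;

        // java.io.File and FileInputStream treat their argument as a literal path,
        // so the "file:" prefix becomes part of a relative path that does not exist.
        System.out.println(plain + "  -> exists: " + new File(plain).exists());
        System.out.println(scheme + " -> exists: " + new File(scheme).exists());
    }
}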

Can someone tell me what the problem actually is here?

I think file:<home_dir> is not set in your environment variables.
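
If the record reader is the piece that turns the split location into a java.io path, one way to avoid the scheme problem is to open the file through Hadoop's FileSystem API, which resolves file: URIs correctly in local mode. A rough sketch under that assumption (DocumentRecordReader and parseDocument are names from the stack trace; the helper below is hypothetical, not the asker's actual code):

import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper showing a FileSystem-based open.
public class DocumentOpener {
    public static InputStream open(String location, Configuration conf) throws IOException {
        Path path = new Path(location);            // accepts plain paths as well as file:/ URIs
        FileSystem fs = path.getFileSystem(conf);  // resolves to LocalFileSystem in local mode
        return fs.open(path);                      // instead of new FileInputStream(location)
    }
}

Inside a CombineFileRecordReader, the split's Path is also available directly via CombineFileSplit.getPath(index), which avoids the string round-trip altogether.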
