
Hadoop 2.6 and 2.7 Apache Terasort on 500GB or 1TB

While the maps are running and the reducers start up, the job jumps from partial progress straight to 100% and fails with:

15/05/12 07:21:27 INFO terasort.TeraSort: starting
15/05/12 07:21:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/12 07:21:29 INFO input.FileInputFormat: Total input paths to process : 18000

Spent 1514ms computing base-splits.
Spent 109ms computing TeraScheduler splits.
Computing input splits took 1624ms
Sampling 10 splits of 18000
Making 1 from 100000 sampled records
Computing parititions took 315ms
Spent 1941ms computing partitions.
15/05/12 07:21:30 INFO client.RMProxy: Connecting to ResourceManager at n1/192.168.2.1:8032
15/05/12 07:21:31 INFO mapreduce.JobSubmitter: number of splits:18000
15/05/12 07:21:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1431389162125_0001
15/05/12 07:21:31 INFO impl.YarnClientImpl: Submitted application application_1431389162125_0001
15/05/12 07:21:31 INFO mapreduce.Job: The url to track the job: http://n1:8088/proxy/application_1431389162125_0001/
15/05/12 07:21:31 INFO mapreduce.Job: Running job: job_1431389162125_0001
15/05/12 07:21:37 INFO mapreduce.Job: Job job_1431389162125_0001 running in uber mode : false
15/05/12 07:21:37 INFO mapreduce.Job:  map 0% reduce 0%
15/05/12 07:21:47 INFO mapreduce.Job:  map 1% reduce 0%
15/05/12 07:22:01 INFO mapreduce.Job:  map 2% reduce 0%
15/05/12 07:22:13 INFO mapreduce.Job:  map 3% reduce 0%
15/05/12 07:22:25 INFO mapreduce.Job:  map 4% reduce 0%
15/05/12 07:22:38 INFO mapreduce.Job:  map 5% reduce 0%
15/05/12 07:22:50 INFO mapreduce.Job:  map 6% reduce 0%
15/05/12 07:23:02 INFO mapreduce.Job:  map 7% reduce 0%
15/05/12 07:23:15 INFO mapreduce.Job:  map 8% reduce 0%
15/05/12 07:23:27 INFO mapreduce.Job:  map 9% reduce 0%
15/05/12 07:23:40 INFO mapreduce.Job:  map 10% reduce 0%
15/05/12 07:23:52 INFO mapreduce.Job:  map 11% reduce 0%
15/05/12 07:24:02 INFO mapreduce.Job:  map 100% reduce 100%
15/05/12 07:24:06 INFO mapreduce.Job: Job job_1431389162125_0001 failed with state FAILED due to: Task failed task_1431389162125_0001_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

This is with the default configuration, and it fails every time.

I have commented out every configuration entry I had added to the XML files while trying to track this down, but I still get the job failure, and only once the reduce phase starts.

Yarn handles the resource management and also provides for batch workloads that can use MapReduce as well as for real-time workloads.

Memory settings can be configured at the YARN container level and also at the mapper and reducer level. Memory is requested in increments of the YARN container size. Mapper and reducer tasks run inside containers.
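
As a rough sketch of those container-level settings (the values below are placeholder assumptions for illustration, not taken from the cluster in the question), they live in yarn-site.xml:

<!-- yarn-site.xml: example container-level memory limits (placeholder values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>   <!-- total memory YARN may allocate on each node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>   <!-- smallest container YARN will hand out -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>   <!-- largest container a single request may ask for -->
</property>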

mapreduce.map.memory.mb and mapreduce.reduce.memory.mb

The parameters above describe the upper memory limit for a map or reduce task; if the memory subscribed by the task exceeds this limit, the corresponding container will be killed.

These parameters determine the maximum amount of memory that can be assigned to mapper and reduce tasks respectively. Let us look at an example: the mapper is bound by the upper memory limit defined in the configuration parameter mapreduce.map.memory.mb.

However, if the value of yarn.scheduler.minimum-allocation-mb is greater than the value of mapreduce.map.memory.mb, then yarn.scheduler.minimum-allocation-mb is respected and containers of that size are given out.

These parameters need to be set carefully; if set incorrectly they can lead to poor performance or out-of-memory errors.
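
For example, a minimal sketch in mapred-site.xml (the sizes are assumptions for illustration only and must be tuned to the actual node memory):

<!-- mapred-site.xml: per-task container sizes (example values, adjust to your cluster) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>   <!-- container size requested for each map task -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>   <!-- container size requested for each reduce task -->
</property>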

mapreduce.reduce.java.opts and mapreduce.map.java.opts

The value of this property must be smaller than the upper bound for the map/reduce task defined in mapreduce.map.memory.mb / mapreduce.reduce.memory.mb, since the JVM heap has to fit within the memory allocated to the map/reduce task.
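
A common rule of thumb, offered here only as a hedged sketch rather than as part of the original answer, is to keep the JVM heap at roughly 75-80% of the container size set above:

<!-- mapred-site.xml: JVM heap must fit inside the container sizes above -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value>   <!-- ~80% of an assumed mapreduce.map.memory.mb of 2048 -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value>   <!-- ~80% of an assumed mapreduce.reduce.memory.mb of 4096 -->
</property>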
