
How to tune the Hadoop MapReduce parameters on Amazon EMR?

My MR job stops at map 100% reduce 35% and prints many error messages similar to: running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 3.7 GB of 15 GB virtual memory used. Killing container.

My input *.bz2 file is about 4 GB (about 38 GB uncompressed). Running this job on Amazon EMR with one master and two slaves takes about an hour.

My questions are:
- Why does this job use so much memory?
- Why does this job take about an hour? A word-count job on about 40 GB usually takes around 10 minutes on a small 4-node cluster.
- How do I tune the MR parameters to solve this problem?
- Which Amazon EC2 instance type is best suited to this problem?

Please note the following counters from the log:
- Physical memory (bytes) snapshot = 43327889408 => 43.33 GB
- Virtual memory (bytes) snapshot = 108950675456 => 108.95 GB
- Total committed heap usage (bytes) = 34940649472 => 34.94 GB

My proposed solutions are below, but I am not sure whether they are correct:
- Use larger Amazon EC2 instances with at least 8 GB of memory
- Tune the MR parameters using the following code

Version 1:

Configuration conf = new Configuration();
// Don't kill the container if physical memory exceeds "mapreduce.map.memory.mb"
// or "mapreduce.reduce.memory.mb". Note: these checks are enforced by the
// NodeManager, which reads them from yarn-site.xml at startup, so setting them
// in the job configuration may have no effect on an already-running cluster.
conf.setBoolean("yarn.nodemanager.pmem-check-enabled", false);
conf.setBoolean("yarn.nodemanager.vmem-check-enabled", false);
// Set configuration values before creating the Job: Job.getInstance() copies
// the Configuration, so changes made to conf afterwards are not picked up.
Job job = Job.getInstance(conf, "jobtest1");

Version 2:

Configuration conf = new Configuration();
//conf.set("mapreduce.input.fileinputformat.split.minsize","3073741824");
// Request 8 GB containers and give the JVM heap about 75% of that, leaving
// headroom for non-heap memory (thread stacks, direct buffers, etc.).
conf.set("mapreduce.map.memory.mb", "8192");
conf.set("mapreduce.map.java.opts", "-Xmx6144m");
conf.set("mapreduce.reduce.memory.mb", "8192");
conf.set("mapreduce.reduce.java.opts", "-Xmx6144m");
// As in Version 1, set these values before creating the Job.
Job job = Job.getInstance(conf, "jobtest2");

Log:

15/11/08 11:37:27 INFO mapreduce.Job:  map 100% reduce 35%
15/11/08 11:37:27 INFO mapreduce.Job: Task Id : attempt_1446749367313_0006_r_000006_2, Status : FAILED
Container [pid=24745,containerID=container_1446749367313_0006_01_003145] is running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 3.7 GB of 15 GB virtual memory used. Killing container.
Dump of the process-tree for container_1446749367313_0006_01_003145 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 24745 24743 24745 24745 (bash) 0 0 9658368 291 /bin/bash -c /usr/lib/jvm/java-openjdk/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Xmx2304m -Djava.io.tmpdir=/mnt1/yarn/usercache/ec2-user/appcache/application_1446749367313_0006/container_1446749367313_0006_01_003145/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1446749367313_0006/container_1446749367313_0006_01_003145 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild **.***.***.*** 32846 attempt_1446749367313_0006_r_000006_2 3145 1>/var/log/hadoop-yarn/containers/application_1446749367313_0006/container_1446749367313_0006_01_003145/stdout 2>/var/log/hadoop-yarn/containers/application_1446749367313_0006/container_1446749367313_0006_01_003145/stderr  
    |- 24749 24745 24745 24745 (java) 14124 1281 3910426624 789477 /usr/lib/jvm/java-openjdk/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2304m -Djava.io.tmpdir=/mnt1/yarn/usercache/ec2-user/appcache/application_1446749367313_0006/container_1446749367313_0006_01_003145/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1446749367313_0006/container_1446749367313_0006_01_003145 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild **.***.***.*** 32846 attempt_1446749367313_0006_r_000006_2 3145 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

15/11/08 11:37:28 INFO mapreduce.Job:  map 100% reduce 25%
15/11/08 11:37:30 INFO mapreduce.Job:  map 100% reduce 26%
15/11/08 11:37:37 INFO mapreduce.Job:  map 100% reduce 27%
15/11/08 11:37:42 INFO mapreduce.Job:  map 100% reduce 28%
15/11/08 11:37:53 INFO mapreduce.Job:  map 100% reduce 29%
15/11/08 11:37:57 INFO mapreduce.Job:  map 100% reduce 34%
15/11/08 11:38:02 INFO mapreduce.Job:  map 100% reduce 35%
15/11/08 11:38:13 INFO mapreduce.Job:  map 100% reduce 36%
15/11/08 11:38:22 INFO mapreduce.Job:  map 100% reduce 37%
15/11/08 11:38:35 INFO mapreduce.Job:  map 100% reduce 42%
15/11/08 11:38:36 INFO mapreduce.Job:  map 100% reduce 100%
15/11/08 11:38:36 INFO mapreduce.Job: Job job_1446749367313_0006 failed with state FAILED due to: Task failed task_1446749367313_0006_r_000001
Job failed as tasks failed. failedMaps:0 failedReduces:1

15/11/08 11:38:36 INFO mapreduce.Job: Counters: 43
    File System Counters
        FILE: Number of bytes read=11806418671
        FILE: Number of bytes written=22240791936
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=16874
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=59
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
        S3: Number of bytes read=3942336319
        S3: Number of bytes written=0
        S3: Number of read operations=0
        S3: Number of large read operations=0
        S3: Number of write operations=0
    Job Counters 
        Failed reduce tasks=22
        Killed reduce tasks=5
        Launched map tasks=59
        Launched reduce tasks=27
        Data-local map tasks=59
        Total time spent by all maps in occupied slots (ms)=114327828
        Total time spent by all reduces in occupied slots (ms)=131855700
        Total time spent by all map tasks (ms)=19054638
        Total time spent by all reduce tasks (ms)=10987975
        Total vcore-seconds taken by all map tasks=19054638
        Total vcore-seconds taken by all reduce tasks=10987975
        Total megabyte-seconds taken by all map tasks=27438678720
        Total megabyte-seconds taken by all reduce tasks=31645368000
    Map-Reduce Framework
        Map input records=728795619
        Map output records=728795618
        Map output bytes=50859151614
        Map output materialized bytes=10506705085
        Input split bytes=16874
        Combine input records=0
        Spilled Records=1457591236
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=150143
        CPU time spent (ms)=14360870
        Physical memory (bytes) snapshot=43327889408
        Virtual memory (bytes) snapshot=108950675456
        Total committed heap usage (bytes)=34940649472
    File Input Format Counters 
        Bytes Read=0

I am not sure about Amazon EMR specifics, so here are a few considerations regarding MapReduce itself:

  1. Although bzip2 compresses better than gzip, it is slower. bzip2 decompresses faster than it compresses, but its decompression is still slower than that of other formats. So, broadly speaking, that alone already separates this job from the 40 GB word-count program that ran in ten minutes (assuming that program used no compression). The next question is whether it should be this much slower.

  2. That said, your job is still failing after an hour, so confirm that first; we can only talk about performance once the job runs successfully. So let's think about why it fails. You are getting memory errors, and based on the error the containers are failing during the reducer phase (since the mapper phase completed 100%). Most likely not even one reducer succeeded. Even though the 35% progress may fool you into thinking some reducers ran, that percentage can come from the copy/shuffle work that happens before the first reducer actually runs. One way to confirm is to check whether any reducer output files were generated, as sketched below.
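
If it helps, here is a minimal sketch of that check using the Hadoop FileSystem API; the output path and class name are placeholders, not from the original job:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckReducerOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Reducer output files are named part-r-00000, part-r-00001, ...
        // "/user/ec2-user/output" is a placeholder; substitute your job's output path.
        FileStatus[] parts = fs.globStatus(new Path("/user/ec2-user/output/part-r-*"));
        if (parts == null || parts.length == 0) {
            System.out.println("No reducer output found - no reducer completed.");
        } else {
            for (FileStatus part : parts) {
                System.out.println(part.getPath() + " : " + part.getLen() + " bytes");
            }
        }
    }
}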

Once you confirm that no reducer ran at all, you can increase the container memory as in your Version 2.

Your Version 1 will help you see whether only one particular container is causing the problem, by allowing the job to complete anyway.

The size of the input should drive the number of reducers. The rule of thumb is 1 reducer per 1 GB, unless you compress the mapper output data. So in this case the ideal number should be at least 38. Try passing the command-line option -D mapred.reduce.tasks=40 and see whether anything changes.
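
Note that the -D generic option is only honored if the driver runs through ToolRunner (or otherwise parses generic options). A minimal sketch of such a driver, with a hypothetical class name; the programmatic equivalent is job.setNumReduceTasks(40):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains any -D options parsed by ToolRunner, e.g.:
        //   hadoop jar job.jar WordCountDriver -D mapred.reduce.tasks=40 <in> <out>
        Job job = Job.getInstance(getConf(), "jobtest2");
        // Programmatic equivalent of the -D option above:
        // job.setNumReduceTasks(40);
        // ... set mapper, reducer, input/output formats and paths here ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}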
