
MapReduce job running in local mode instead of cluster

The configuration is done to run MapReduce jobs in cluster mode on top of YARN, but the job runs in local mode. I am not able to figure out what the issue is.

Below is yarn-site.xml (on the master node):

    <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>namenode:8031</value>
    </property>
     <property>
            <name>yarn.nodemanager.aux-services</name>    <!-- NodeManager auxiliary service -->
            <value>mapreduce_shuffle</value>    <!-- enables the shuffle between mappers and reducers -->
    </property>

    <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>namenode:8030</value>
    </property>

    <property>
            <name>yarn.resourcemanager.address</name>
            <value>namenode:8032</value>
    </property>

    <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>namenode</value>
    </property>

    <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>2042</value>
    </property>

    <property>
            <name>yarn.nodemanager.vmem-check-enabled</name>
            <value>false</value>
    </property>

yarn-site.xml (on the slave node):

    <property>
            <name>yarn.nodemanager.aux-services</name>    <!-- NodeManager auxiliary service -->
            <value>mapreduce_shuffle</value>    <!-- enables the shuffle between mappers and reducers -->
    </property>

    <property>
            <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>namenode:8031</value>    <!-- address of the resource tracker on the ResourceManager host -->
    </property>

mapred-site.xml (on both the master node and the slave nodes):

    <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
    </property>
    <property>
            <name>yarn.app.mapreduce.am.resource.mb</name>
            <value>2048</value>
    </property>
    <property>
            <name>mapreduce.map.memory.mb</name>
            <value>2048</value>
    </property>

    <property>
            <name>mapreduce.reduce.memory.mb</name>
            <value>2048</value>
    </property>
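
To see which framework the submitting client actually resolves (and which file it comes from), a small check like the sketch below can be run with `hadoop jar` on the node that submits the job. This is only an illustration: `CheckClientConf` is not part of the job, and it assumes the same Hadoop classpath/conf directory that is used for submission.

    import java.util.Arrays;
    import org.apache.hadoop.mapred.JobConf;

    // Illustrative diagnostic only (CheckClientConf is an example name).
    // JobConf loads mapred-default.xml and mapred-site.xml from the client
    // classpath, so this prints the framework the submitting client will use.
    public class CheckClientConf {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            System.out.println("mapreduce.framework.name = "
                    + conf.get("mapreduce.framework.name"));
            // Which resource supplied the value; null typically means it was
            // never set on the client, so it falls back to the local runner.
            String[] sources = conf.getPropertySources("mapreduce.framework.name");
            System.out.println("resolved from: "
                    + (sources == null ? "(not set)" : Arrays.toString(sources)));
        }
    }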

On submission, the job output is as below:

18/12/06 16:20:43 INFO input.FileInputFormat: Total input paths to process : 1
18/12/06 16:20:43 INFO mapreduce.JobSubmitter: number of splits:2
18/12/06 16:20:43 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1556004420_0001
18/12/06 16:20:43 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
18/12/06 16:20:43 INFO mapreduce.Job: Running job: job_local1556004420_0001
18/12/06 16:20:43 INFO mapred.LocalJobRunner: OutputCommitter set in config null
18/12/06 16:20:43 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
18/12/06 16:20:43 INFO mapred.LocalJobRunner: Waiting for map tasks
18/12/06 16:20:43 INFO mapred.LocalJobRunner: Starting task: attempt_local1556004420_0001_m_000000_0
18/12/06 16:20:43 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
18/12/06 16:20:43 INFO mapred.MapTask: Processing split: hdfs://namenode:9001/all-the-news/articles1.csv:0+134217728
18/12/06 16:20:43 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
18/12/06 16:20:43 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
18/12/06 16:20:43 INFO mapred.MapTask: soft limit at 83886080
18/12/06 16:20:43 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
18/12/06 16:20:43 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
18/12/06 16:20:43 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
18/12/06 16:20:44 INFO mapreduce.Job: Job job_local1556004420_0001 running in uber mode : false
18/12/06 16:20:44 INFO mapreduce.Job:  map 0% reduce 0%
18/12/06 16:20:49 INFO mapred.LocalJobRunner: map > map
18/12/06 16:20:50 INFO mapreduce.Job:  map 1% reduce 0%
18/12/06 16:20:52 INFO mapred.LocalJobRunner: map > map
18/12/06 16:20:55 INFO mapred.LocalJobRunner: map > map
18/12/06 16:20:56 INFO mapreduce.Job:  map 2% reduce 0%
18/12/06 16:20:58 INFO mapred.LocalJobRunner: map > map
18/12/06 16:21:01 INFO mapred.LocalJobRunner: map > map
18/12/06 16:21:02 INFO mapreduce.Job:  map 3% reduce 0%
18/12/06 16:21:04 INFO mapred.LocalJobRunner: map > map

Why is it running in local mode? I am running this job on a 200 MB file with 3 nodes: 2 datanodes and 1 namenode.

The /etc/hosts file is shown below:

127.0.0.1       localhost
127.0.1.1       anil-Lenovo-Product

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.8.98 namenode
192.168.8.99 datanode
192.168.8.100 datanode2

[YARN UI screenshot]

  1. First check whether these configurations are effective: http://{your-resource-manager-host}:8088/conf by default, or your configured UI address: http://namenode:8088/conf (a small sketch for this check appears after these steps).

  2. Then make sure these properties are configured:

    in mapred-site.xml:

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>

  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>

  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>

in yarn-site.xml:

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

Restart the YARN service and check whether it works.
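
To spot-check step 1 without opening a browser, a sketch like the following can fetch the ResourceManager's /conf page and pull out the relevant value. `DumpRmConf` is just an example name, and it assumes the RM web UI is reachable at namenode:8088 (adjust the host/port to your cluster).

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical helper: reads the RM's /conf endpoint (standard Hadoop
    // <property><name>..</name><value>..</value> XML) and prints the value
    // of mapreduce.framework.name as the cluster side sees it.
    public class DumpRmConf {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://namenode:8088/conf"); // adjust if needed
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            Matcher m = Pattern.compile(
                    "<name>mapreduce\\.framework\\.name</name>\\s*<value>([^<]*)</value>")
                    .matcher(body);
            System.out.println(m.find()
                    ? "mapreduce.framework.name = " + m.group(1)
                    : "mapreduce.framework.name not found in /conf output");
        }
    }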

Jobs are submitted through the ClientProtocol interface, and one of its two implementations is created when the client starts the submission:

  • LocalClientProtocolProvider: job IDs are prefixed with job_local
  • YarnClientProtocolProvider: job IDs are prefixed with job_

The choice is made according to the MRConfig.FRAMEWORK_NAME configuration (its value is "mapreduce.framework.name"), and its valid options are classic, yarn, and local.
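
As a rough illustration of that selection (not the actual Hadoop source; `WhichRunner` is just an example name, and it assumes the MapReduce client jars are on the classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapreduce.MRConfig;

    // Rough sketch of the provider selection described above.
    public class WhichRunner {
        public static void main(String[] args) {
            // JobConf pulls in mapred-default.xml / mapred-site.xml from the classpath.
            Configuration conf = new JobConf();
            String framework = conf.get(MRConfig.FRAMEWORK_NAME, MRConfig.LOCAL_FRAMEWORK_NAME);
            if (MRConfig.YARN_FRAMEWORK_NAME.equals(framework)) {
                // YarnClientProtocolProvider -> job IDs like job_1556004420_0001
                System.out.println("Submitting to YARN");
            } else if (MRConfig.LOCAL_FRAMEWORK_NAME.equals(framework)) {
                // LocalClientProtocolProvider -> job IDs like job_local1556004420_0001
                System.out.println("Running with LocalJobRunner");
            } else {
                // "classic" (MRv1 JobTracker) or an unexpected value
                System.out.println("Framework: " + framework);
            }
        }
    }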

Good luck!
