
What Hadoop configuration changes have to be made on both the master and slave nodes?

Do we have to modify mapred-site.xml on both the master and slave nodes of the Hadoop cluster for parameters such as the maximum number of map and reduce tasks to be executed in parallel, or will configuration changes on the master node alone suffice?

Will changes made in mapred-site.xml for parameters such as mapred.map.child.java.opts and mapred.reduce.child.java.opts on the master node take effect on the client node as well, or do we have to make them on both?

Do we have to specify dfs.block.size on both the master and client nodes to make the block size different from the default value?

If not, are there parameters that have to be specified on both the master and client nodes in order to optimize the Hadoop cluster?

You need to change all the configuration files, conf/*-site.xml, on all the machines. The reason is that Hadoop does not have a single, global location for configuration information. Instead, each node in the cluster has its own set of configuration files, and it is our duty to make sure they stay in sync across the system.
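As an illustration, a minimal mapred-site.xml sketch using the classic (MRv1) property names the question mentions; the values shown are illustrative assumptions, not recommendations:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Per-TaskTracker slot limits: read by each slave's TaskTracker,
       so this file must be present on every slave node. -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value> <!-- assumed value for illustration -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value> <!-- assumed value for illustration -->
  </property>

  <!-- JVM options for child map/reduce tasks. -->
  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx512m</value> <!-- assumed heap size -->
  </property>
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx1024m</value> <!-- assumed heap size -->
  </property>
</configuration>
```

Note that dfs.block.size belongs in hdfs-site.xml, and it is consulted by the node that writes the file (i.e., the client), which is one reason the client's copy of the configuration matters too. Rather than editing each node by hand, a common practice is to edit the files once and push them to every node with a tool such as rsync or scp so the copies cannot drift apart.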

