
Can a slave node run a Hadoop Map/Reduce job?

I installed Hadoop on two nodes (a master node and a slave node). Can I run a Map/Reduce job from the slave machine, or use HDFS from the slave machine? Running a map/reduce job from the master node works fine, but when I try to run one from the slave node, the following error appears.

java.net.ConnectException: failed on connection exception.

You can run jobs from any machine in the cluster so long as each node has the proper jobtracker location property configured. In fact, you can run jobs from any machine, including your personal desktop or laptop, so long as you have a connection to the server (that is, no firewalls are in your way) and Hadoop is configured with the proper jobtracker and namenode.
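As a sketch of what that looks like in practice: assuming a Hadoop 1.x client whose configuration points at the cluster, submitting a job from the slave (or a laptop) uses the same commands as on the master. The jar name and HDFS paths below are illustrative, not from the original question.

```shell
# Run from any machine whose Hadoop config points at the cluster's
# jobtracker and namenode. Assumes HADOOP_HOME is set.

# HDFS access works from any configured client, not just the master:
hadoop fs -mkdir /user/me/input
hadoop fs -put local.txt /user/me/input/

# Job submission is likewise identical on every client machine
# (hadoop-examples-*.jar ships with Hadoop 1.x distributions):
hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount \
    /user/me/input /user/me/output
```

If these commands fail with a ConnectException on the slave but succeed on the master, the client-side configuration (not the cluster) is the usual culprit.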

Make sure mapred.job.tracker on the slave is set to the master's host and port, something like master.com:8021. Also verify that the slave can reach the master, for example by running telnet master.com 8021. The connection itself should be possible, since the master (jobtracker) is already able to schedule tasks on the slave's tasktracker.
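A minimal client-side configuration sketch, using Hadoop 1.x file names and the master.com:8021 jobtracker address from above. The namenode port 8020 is an assumed default, not taken from the question.

```xml
<!-- mapred-site.xml on the slave (or any client machine) -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- jobtracker host:port, as suggested in the answer -->
    <value>master.com:8021</value>
  </property>
</configuration>
```

```xml
<!-- core-site.xml on the same machine; 8020 is an assumed
     default namenode port, adjust to your cluster -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master.com:8020</value>
  </property>
</configuration>
```

With both files in place, the slave resolves the jobtracker and namenode exactly as the master does, and the ConnectException should disappear unless a firewall blocks the ports.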
