
JobTracker in Hadoop not running

Actually I installed and configured my Hadoop single-node cluster using

http://wiki.apache.org/hadoop/Running_Hadoop_On_Ubuntu_Linux_%28Single-Node_Cluster%29

Now when I access

NameNode (http://localhost:50070/), it is running fine, but

JobTracker (http://localhost:50030/) is not working.

What could be the cause?

Thanks

After you run $HADOOP_HOME/bin/start-all.sh, you can run the command "jps" to check whether all the necessary Hadoop processes have started. If everything is OK, the output should look like this:

hd0@HappyUbuntu:/usr/local/hadoop$ jps
18694 NameNode
19576 TaskTracker
19309 JobTracker
19225 SecondaryNameNode
19629 Jps
18972 DataNode

It's possible that your JobTracker process is not running. So check that first. If it is indeed missing, look into the log files in the logs directory for a more specific reason.
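
For example, a quick check could look like this (the log file name depends on your user name and host name, so the wildcard below is an assumption based on the default Hadoop 1.x log naming):

$ jps | grep JobTracker
$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-jobtracker-*.log    # most recent lines usually show why it failed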

  • hd0@HappyUbuntu:/usr/local/hadoop$ bin/hadoop jobtracker
  • You will probably see an error about permissions/credentials. Type:
  • sudo chown -R hd0 /usr/local/hadoop
  • Now type "jps" and check that JobTracker is running.
  • Later, you may need to run "bin/hadoop dfsadmin -safemode leave" if you get "org.apache.hadoop.mapred.SafeModeException: JobTracker is in safe mode". (The combined commands are sketched after this list.)
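
Put together, the sequence looks roughly like this (assuming the user hd0 and the /usr/local/hadoop install path shown in the jps output above; adjust both to your setup):

$ sudo chown -R hd0 /usr/local/hadoop                      # give your Hadoop user ownership of the install tree
$ /usr/local/hadoop/bin/stop-all.sh                        # restart the daemons so the change takes effect
$ /usr/local/hadoop/bin/start-all.sh
$ jps                                                      # JobTracker should now appear in the list
$ /usr/local/hadoop/bin/hadoop dfsadmin -safemode leave    # only if you hit the SafeModeException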

Format your NameNode using the following command.

$ <path_to_hadoop.x.xx>/bin/hadoop namenode -format

This will solve your problem.
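
A minimal sketch of the full sequence, assuming the /usr/local/hadoop install path used elsewhere on this page (note that formatting wipes any existing HDFS data, so it only makes sense on a fresh single-node setup):

$ /usr/local/hadoop/bin/stop-all.sh
$ /usr/local/hadoop/bin/hadoop namenode -format    # erases all HDFS metadata
$ /usr/local/hadoop/bin/start-all.sh
$ jps                                              # JobTracker should now be listed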

In newer versions of Hadoop you can monitor the jobs being executed at

localhost:8088

where you will find the web UI of the newer Hadoop releases.

Link : https://stackoverflow.com/a/24105597/1971660
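
If you prefer the command line, the same information is exposed through the ResourceManager's REST API on that port; a quick check, assuming the default port 8088 on localhost:

$ curl http://localhost:8088/ws/v1/cluster/info    # basic cluster status from the ResourceManager
$ curl http://localhost:8088/ws/v1/cluster/apps    # list of submitted/running applications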

Might be a bit late to reply, but I hope it will be useful for other readers.

In Hadoop 2.0, the JobTracker and TaskTracker no longer exist and have been replaced by three components:

ResourceManager : a scheduler that allocates available resources in the cluster amongst the competing applications.

NodeManager : runs on each node in the cluster and takes direction from the ResourceManager. It is responsible for managing resources available on a single node.

ApplicationMaster : an instance of a framework-specific library, an ApplicationMaster runs a specific YARN job and is responsible for negotiating resources from the ResourceManager and also working with the NodeManager to execute and monitor Containers.

So as long as you see the ResourceManager (on the NameNode machine) and NodeManager (on the DataNode machines) processes, you are good to go.
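
For comparison with the Hadoop 1.x jps output above, a pseudo-distributed Hadoop 2.x node typically shows something like this (the PIDs below are only illustrative):

$ jps
4369 NameNode
4712 DataNode
4983 SecondaryNameNode
5231 ResourceManager
5467 NodeManager
5632 Jps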

Well, what distribution/version of Hadoop are you using? It's been a long time since I have used hadoop-site.xml. With Hadoop 1.0.x it is core-site.xml and mapred-site.xml. Basically, I think start-all is not starting your JobTracker at all because it is not configured properly.
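
For reference, a minimal mapred-site.xml for Hadoop 1.0.x looks something like the following (localhost:9001 is the JobTracker address used in the tutorial linked in the question; treat the exact host/port as an assumption for your setup):

$ cat $HADOOP_HOME/conf/mapred-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>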

Try this command instead, it is more effective: hadoop dfsadmin -safemode leave

Start it with:

 $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
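
Once the daemon is up, the JobHistory Server web UI should be reachable on its default port (19888, unless you have overridden mapreduce.jobhistory.webapp.address); a quick check:

$ jps | grep JobHistoryServer
$ curl http://localhost:19888/ws/v1/history/info    # basic info from the history server REST API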
