
Could not find or load main class when trying to format namenode; hadoop installation on MAC OS X 10.9.2

I am trying to get a single-node development cluster set up with hadoop on my MAC OS X 10.9.2. I have tried various online tutorials, the most recent being this one. To summarize what I did:

1) $ brew install hadoop

This installed hadoop 2.2.0 in /usr/local/Cellar/hadoop/2.2.0
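As a quick sanity check, assuming the default Homebrew cellar layout (where the package contents sit under libexec), you can confirm where the jars actually landed:

$ ls /usr/local/Cellar/hadoop/2.2.0
$ ls /usr/local/Cellar/hadoop/2.2.0/libexec/share/hadoop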

2) Configured environment variables. Here is the relevant part of my .bash_profile:

### Java_HOME 
export JAVA_HOME="$(/usr/libexec/java_home)"

### HADOOP Environment variables
export HADOOP_PREFIX="/usr/local/Cellar/hadoop/2.2.0"
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/libexec/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX

export CLASSPATH=$CLASSPATH:.
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/libexec/share/hadoop/common/hadoop-common-2.2.0.jar
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/libexec/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar
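After editing .bash_profile, it helps to reload it and ask Hadoop what it thinks its classpath is; hadoop classpath is the stock subcommand for that (a quick check, assuming the exports above):

$ source ~/.bash_profile
$ echo $HADOOP_CONF_DIR
$ $HADOOP_PREFIX/bin/hadoop classpath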

3) Configured HDFS (hdfs-site.xml):

<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/Cellar/hadoop/2.2.0/hdfs/datanode</value>
    <description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/Cellar/hadoop/2.2.0/hdfs/namenode</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
  </property>
</configuration>
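The namenode and datanode directories referenced above are not created by the install; formatting can usually create them, but making them up front rules out permission surprises (a minimal sketch, using the paths from the config):

$ mkdir -p /usr/local/Cellar/hadoop/2.2.0/hdfs/namenode
$ mkdir -p /usr/local/Cellar/hadoop/2.2.0/hdfs/datanode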

4) Configured core-site.xml:

<configuration>
  <!-- Let Hadoop modules know where the HDFS NameNode is at! -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost/</value>
    <description>NameNode URI</description>
  </property>
</configuration>
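With no port in the URI, clients fall back to the default NameNode RPC port (8020), so this is equivalent to hdfs://localhost:8020/.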

5) Configured yarn-site.xml:

<configuration>
   <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>128</value>
    <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
    <description>Maximum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
    <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
    <description>Physical memory, in MB, to be made available to running containers</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
    <description>Number of CPU cores that can be allocated for containers.</description>
  </property>
</configuration>
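For scale, assuming the default scheduler behavior: with these values a single NodeManager advertises 4096 MB and 2 vcores to the ResourceManager, each container is granted between 128 MB and 2048 MB, and memory requests are rounded up to a multiple of the 128 MB minimum allocation.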

6) Then I attempted to format the namenode using:

$HADOOP_PREFIX/bin/hdfs namenode -format

This gives me the error: Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode

I looked at the hdfs script, and the line that runs this essentially amounts to calling

$ java org.apache.hadoop.hdfs.server.namenode.NameNode
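For context, the launcher logic in bin/hdfs boils down to something like the sketch below (simplified, not the verbatim script). The key point is that the real script builds its classpath from paths under $HADOOP_PREFIX, so a wrong prefix means the hdfs jars never make it onto the classpath:

# simplified shape of $HADOOP_PREFIX/bin/hdfs for the namenode subcommand
. "$HADOOP_LIBEXEC_DIR/hdfs-config.sh"   # assembles CLASSPATH from $HADOOP_PREFIX/share/hadoop/*
CLASS=org.apache.hadoop.hdfs.server.namenode.NameNode
exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS -classpath "$CLASSPATH" $CLASS "$@"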

So, figuring this was a classpath issue, I tried a few things:

a) Adding hadoop-common-2.2.0.jar and hadoop-hdfs-2.2.0.jar to the classpath, as shown in the .bash_profile above

b) Adding the line

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

to my .bash_profile, as the tutorial suggested. (I later removed it, since it didn't seem to help.)

c) I also considered writing a shell script that adds every jar in $HADOOP_HOME/libexec/share/hadoop to $HADOOP_CLASSPATH, but that seemed unnecessary and prone to future problems.

Any idea why I keep getting Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode? Thanks in advance.

Because of the way the brew package is laid out, you need to point HADOOP_PREFIX at the libexec folder inside the package:

export HADOOP_PREFIX="/usr/local/Cellar/hadoop/2.2.0/libexec"

Then drop libexec from the conf directory declaration:

export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
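After changing those two exports, reload the profile and re-run the format to confirm the class is now found, e.g.:

$ source ~/.bash_profile
$ $HADOOP_PREFIX/bin/hdfs namenode -format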

I had the same problem; it was caused by "root" permissions. Run the hadoop/hdfs command with sudo, as before:

sudo hdfs namenode -format

Try $HADOOP_PREFIX/bin/hadoop namenode -format instead of $HADOOP_PREFIX/bin/hdfs namenode -format
