
Could not find or load main class when trying to format namenode; hadoop installation on MAC OS X 10.9.2

I am trying to get a development single-node cluster set up on my MAC OS X 10.9.2 using hadoop. I have tried various online tutorials, the most recent being this one. To summarize what I did:

1) $ brew install hadoop

This installed hadoop 2.2.0 in /usr/local/Cellar/hadoop/2.2.0
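Worth noting for later: with Homebrew, the wrapper scripts sit in the keg's bin directory, while the actual Hadoop distribution (its own bin, etc, sbin, share) lives under libexec. A quick way to inspect the layout (paths assume the default Homebrew prefix):

# bin/ holds brew's wrapper scripts; libexec/ holds the real distribution
ls /usr/local/Cellar/hadoop/2.2.0
ls /usr/local/Cellar/hadoop/2.2.0/libexec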

2) Configured environment variables. Here is the relevant part of my .bash_profile:

### JAVA_HOME
export JAVA_HOME="$(/usr/libexec/java_home)"

### HADOOP Environment variables
export HADOOP_PREFIX="/usr/local/Cellar/hadoop/2.2.0"
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/libexec/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX

export CLASSPATH=$CLASSPATH:.
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/libexec/share/hadoop/common/hadoop-common-2.2.0.jar
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/libexec/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar
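A quick sanity check after editing .bash_profile (a minimal sketch, nothing hadoop-specific): reload the profile and confirm the variables resolve.

source ~/.bash_profile
echo $HADOOP_PREFIX    # should print /usr/local/Cellar/hadoop/2.2.0
echo $HADOOP_CONF_DIR  # should print .../libexec/etc/hadoop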

3) Configured HDFS (hdfs-site.xml):

<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/Cellar/hadoop/2.2.0/hdfs/datanode</value>
    <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/Cellar/hadoop/2.2.0/hdfs/namenode</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
  </property>
</configuration>
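The two file:// values point at local directories; it presumably does no harm to create them up front so the format step has somewhere to write (a hedged sketch, matching the paths above):

# Create the local storage directories referenced by hdfs-site.xml
mkdir -p /usr/local/Cellar/hadoop/2.2.0/hdfs/namenode
mkdir -p /usr/local/Cellar/hadoop/2.2.0/hdfs/datanode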

4) Configured core-site.xml:

<!-- Let Hadoop modules know where the HDFS NameNode is at! -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost/</value>
    <description>NameNode URI</description>
  </property>
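As an aside: once hdfs itself runs, one way to confirm Hadoop resolves this setting is getconf; hdfs://localhost/ with no explicit port should fall back to the HDFS default, 8020.

# Print the NameNode URI as Hadoop actually resolves it
$HADOOP_PREFIX/bin/hdfs getconf -confKey fs.defaultFS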

5) Configured yarn-site.xml:

<configuration>
   <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>128</value>
    <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
    <description>Maximum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
    <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.     </description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
    <description>Physical memory, in MB, to be made available to running containers</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
    <description>Number of CPU cores that can be allocated for containers.</description>
  </property>
</configuration>

6) Then I tried to format the namenode using:

$HADOOP_PREFIX/bin/hdfs namenode -format

This gives me the error: Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode.

I looked at the hdfs script, and the line that runs it basically amounts to calling

$ java org.apache.hadoop.hdfs.server.namenode.NameNode
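One way to see the exact java command line (and CLASSPATH) the hdfs wrapper assembles is to trace it with bash -x; the final exec line shows whether the hadoop jars made it onto the classpath (a diagnostic sketch only):

# Trace the wrapper script to expose the java invocation it builds
bash -x $HADOOP_PREFIX/bin/hdfs namenode -format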

So, thinking this was a classpath issue, I tried a few things:

a) adding hadoop-common-2.2.0.jar and hadoop-hdfs-2.2.0.jar to the classpath, as shown in the .bash_profile above

b) adding the line

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

to my .bash_profile, as suggested by this tutorial. (I later removed it since it did not seem to help.)

c) I also considered writing a shell script that adds every jar in $HADOOP_HOME/libexec/share/hadoop to $HADOOP_CLASSPATH, but this seemed unnecessary and prone to future problems (a sketch of what that loop might look like follows).
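A hypothetical sketch of that option (c), for illustration only; as noted above it is brittle and, per the accepted fix below, not the real problem:

# Glob every jar under the share tree onto HADOOP_CLASSPATH (brittle)
for jar in $(find "$HADOOP_HOME/libexec/share/hadoop" -name '*.jar'); do
  export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$jar"
done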

Any idea why I keep getting Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode? Thanks in advance.

Because of the way the brew package is laid out, you need to point HADOOP_PREFIX at the libexec folder within the package:

export HADOOP_PREFIX="/usr/local/Cellar/hadoop/2.2.0/libexec"

You would then remove libexec from the declaration of the conf directory:

export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
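With that change the distribution's bin, etc, and share directories all sit directly under the prefix; a quick hedged check, assuming the 2.2.0 paths above:

# The hdfs jar should now resolve directly under the prefix
ls $HADOOP_PREFIX/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar
# and the format command should find the NameNode class
$HADOOP_PREFIX/bin/hdfs namenode -format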

I had the same problem, and it was because of "root" permissions. Run the hadoop hdfs command with sudo as before:

sudo hdfs namenode -format

Try $HADOOP_PREFIX/bin/hadoop namenode -format instead of $HADOOP_PREFIX/bin/hdfs namenode -format
