
Datanode not working on Hadoop single node cluster on Windows

There are many similar questions on Stack Overflow, but none of them solves my problem.

I'm trying to start my namenode and datanode. The namenode starts working, but the datanode fails, along with the resource manager and node manager. Here is the error that shows up:

2021-06-17 15:44:09,513 ERROR datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
        at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:231)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2799)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2714)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2756)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2900)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2924)
2021-06-17 15:44:09,518 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
2021-06-17 15:44:09,522 INFO datanode.DataNode: SHUTDOWN_MSG:

Here is my hdfs-site.xml (the failing volume is the dfs.datanode.data.dir below, and with dfs.datanode.failed.volumes.tolerated set to 0, a single bad volume shuts the datanode down):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
     <name>dfs.replication</name>
     <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>C:\Users\username\Documents\hadoop-3.2.1\data\dfs\namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>C:\Users\username\Documents\hadoop-3.2.1\data\dfs\datanode</value>
  </property>
  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>0</value>
  </property>
</configuration>

What could be the solution?

The question is answered here:

https://stackoverflow.com/a/58924939/14194692

That answer is not the accepted one on its question, but I tried it and it worked. Tada.
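For readers who don't want to follow the link: a widely cited fix for this exact DiskErrorException on Windows (which may or may not be word-for-word what the linked answer says) is to write the two storage directories as file: URIs with forward slashes instead of bare backslash paths, so Hadoop does not misparse the drive letter as part of a relative path. A minimal sketch against the hdfs-site.xml above:

<!-- Hedged sketch of the common Windows path fix: the same directories
     as in the question, rewritten as file: URIs with forward slashes. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///C:/Users/username/Documents/hadoop-3.2.1/data/dfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///C:/Users/username/Documents/hadoop-3.2.1/data/dfs/datanode</value>
</property>

After editing, restart HDFS (stop-dfs.cmd, then start-dfs.cmd from Hadoop's sbin folder) and check that the datanode directory actually exists and is writable by the current user, since the DiskChecker also fails on permission problems.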

I'm not deleting my question, because I believe none of the existing questions is asked as clearly as this one. I hope it helps other people.

Cheers.
