
What is the meaning of EOF exceptions in hadoop namenode connections from hbase/filesystem?

This is both a general question about Java EOF exceptions and a question about Hadoop's EOF exception as it relates to jar interoperability. Comments and answers on either topic are acceptable.

Background

I've noticed some threads discussing a cryptic exception that is ultimately thrown by a "readInt" method. The exception seems to have some generic implications that are independent of Hadoop, but in this case it is ultimately caused by the interoperability of Hadoop jars.

In my case, I'm getting it when I try to create a new FileSystem object in Hadoop, from Java.

Question

My question is: what is happening, and why does reading an integer throw an EOF exception? What "file" is this EOF exception referring to, and why would such an exception be thrown if two jars are not capable of interoperating?
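For the general Java half of this: DataInputStream.readInt needs four bytes and throws EOFException when the underlying stream ends before they arrive. The "File" in the name is only a naming convention; the stream can just as well be a socket or an in-memory buffer. A minimal sketch, with no Hadoop involved, that reproduces the exception:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;

    public class ReadIntEof {
        public static void main(String[] args) throws Exception {
            // Only two bytes available, but readInt() needs four.
            DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(new byte[] {1, 2}));
            try {
                in.readInt();
            } catch (EOFException e) {
                // Thrown because the stream ended mid-read; no actual file is involved.
                System.out.println("EOF: stream ended before 4 bytes were read");
            }
        }
    }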

Secondarily, I would also like to know how to fix this error so I can connect to and read/write Hadoop's filesystem remotely, using the hdfs protocol with the Java API...
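For reference, here is a minimal sketch of the kind of client code that produces the trace below. The address is the one from the trace; the port and path are assumptions (a NameNode's IPC port is commonly 8020 or 9000, distinct from the 50070 web UI port):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HadoopRemote {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumed address/port: this must be the NameNode's IPC endpoint,
            // and the NameNode must listen on an externally reachable interface.
            FileSystem fs = FileSystem.get(URI.create("hdfs://10.0.1.37:9000"), conf);
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
            fs.close();
        }
    }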

java.io.IOException: Call to /10.0.1.37:50070 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1139)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
    at sb.HadoopRemote.main(HadoopRemote.java:35)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:819)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:720)

Regarding Hadoop: I fixed the error! You need to make sure core-site.xml serves on 0.0.0.0 instead of 127.0.0.1 (localhost), so that the NameNode listens on all interfaces rather than only the loopback one.
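A sketch of the relevant core-site.xml entry, assuming Hadoop 1.x naming (on 2.x+ the property is fs.defaultFS) and an assumed IPC port of 9000:

    <configuration>
      <property>
        <name>fs.default.name</name>
        <!-- Bind to 0.0.0.0 (or the machine's external IP), not 127.0.0.1,
             so that remote clients can reach the NameNode IPC port. -->
        <value>hdfs://0.0.0.0:9000</value>
      </property>
    </configuration>

After changing this, restart the NameNode so it rebinds to the new address.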

If you get the EOF exception, it means that the port is not accessible externally at that IP, so there is no data to read on the Hadoop client/server IPC connection.

An EOFException on a socket means there is no more data and that the peer has closed the connection.
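A self-contained sketch of that socket behavior, using an arbitrary free local port: the "server" accepts the connection and closes it without writing anything, so the client's readInt finds the stream already at end-of-stream:

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SocketEof {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(0)) {
                // Peer accepts and immediately closes, sending no bytes.
                new Thread(() -> {
                    try { server.accept().close(); } catch (Exception ignored) {}
                }).start();
                try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                     DataInputStream in = new DataInputStream(client.getInputStream())) {
                    in.readInt(); // throws: the peer closed before sending 4 bytes
                } catch (EOFException e) {
                    System.out.println("EOF: peer closed the connection, nothing to read");
                }
            }
        }
    }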

Make sure your device has its VPN off.
