Cannot Read a file from HDFS using Spark
I have installed Cloudera CDH 5 using Cloudera Manager.
I can easily run
hadoop fs -ls /input/war-and-peace.txt
hadoop fs -cat /input/war-and-peace.txt
The above command prints the whole text file to the console.
Now I start the Spark shell and run
val textFile = sc.textFile("hdfs://input/war-and-peace.txt")
textFile.count
Now I get an error:
Spark context available as sc.
scala> val textFile = sc.textFile("hdfs://input/war-and-peace.txt")
2014-12-14 15:14:57,874 INFO [main] storage.MemoryStore (Logging.scala:logInfo(59)) - ensureFreeSpace(177621) called with curMem=0, maxMem=278302556
2014-12-14 15:14:57,877 INFO [main] storage.MemoryStore (Logging.scala:logInfo(59)) - Block broadcast_0 stored as values in memory (estimated size 173.5 KB, free 265.2 MB)
textFile: org.apache.spark.rdd.RDD[String] = hdfs://input/war-and-peace.txt MappedRDD[1] at textFile at <console>:12
scala> textFile.count
2014-12-14 15:15:21,791 INFO [main] ipc.Client (Client.java:handleConnectionTimeout(814)) - Retrying connect to server: input/92.242.140.21:8020. Already tried 0 time(s); maxRetries=45
2014-12-14 15:15:41,905 INFO [main] ipc.Client (Client.java:handleConnectionTimeout(814)) - Retrying connect to server: input/92.242.140.21:8020. Already tried 1 time(s); maxRetries=45
[... identical retry messages, attempts 2 through 26, elided ...]
2014-12-14 15:24:23,250 INFO [main] ipc.Client (Client.java:handleConnectionTimeout(814)) - Retrying connect to server: input/92.242.140.21:8020. Already tried 27 time(s); maxRetries=45
java.net.ConnectException: Call From dn1home/192.168.1.21 to input:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
Why did I get this error? I am able to read the same file using hadoop commands.
Here is the solution. In hdfs://input/war-and-peace.txt, the segment right after hdfs:// is parsed as the namenode host, which is why the client kept trying to connect to a server named input. Give the namenode host and port explicitly:
sc.textFile("hdfs://nn1home:8020/input/war-and-peace.txt")
How did I find out nn1home:8020? Just search for the file core-site.xml
and look for the XML element fs.defaultFS
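If you want to confirm that value programmatically, here is a minimal Python sketch that pulls fs.defaultFS out of core-site.xml (the sample XML below uses hypothetical values; point the parser at your actual file instead):

```python
import xml.etree.ElementTree as ET

def get_default_fs(core_site_xml: str) -> str:
    """Return the fs.defaultFS value from core-site.xml content."""
    root = ET.fromstring(core_site_xml)
    for prop in root.iter("property"):
        if prop.findtext("name") == "fs.defaultFS":
            return prop.findtext("value")
    raise KeyError("fs.defaultFS not found")

# Hypothetical core-site.xml snippet for illustration:
sample = """<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nn1home:8020</value>
  </property>
</configuration>"""

print(get_default_fs(sample))  # hdfs://nn1home:8020
```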
If you want to use sc.textFile("hdfs://...")
you need to give the full (absolute) path; in your example that would be "nn1home:8020/.."
If you want to keep it simple, just use sc.textFile("hdfs:/input/war-and-peace.txt")
Note that it is only one /
This will work:
val textFile = sc.textFile("hdfs://localhost:9000/user/input.txt")
Here, you can take localhost:9000
from the fs.defaultFS
parameter value in Hadoop's core-site.xml
config file.
You are not passing a proper URL string.

- hdfs:// - protocol type
- localhost - ip address (may be different for you, e.g. 127.56.78.4)
- 54310 - port number
- /input/war-and-peace.txt - complete path to the file you want to load

Finally, the URL should be like this:
hdfs://localhost:54310/input/war-and-peace.txt
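The same breakdown can be expressed as a small helper. This is just an illustrative sketch of how the pieces combine into a URL, not a Spark or Hadoop API:

```python
from urllib.parse import urlparse

def hdfs_url(host: str, port: int, path: str) -> str:
    """Assemble an HDFS URL from its parts; the path must be absolute."""
    if not path.startswith("/"):
        raise ValueError("HDFS path must start with /")
    return f"hdfs://{host}:{port}{path}"

url = hdfs_url("localhost", 54310, "/input/war-and-peace.txt")
print(url)  # hdfs://localhost:54310/input/war-and-peace.txt

# urlparse shows how a client splits the URL: netloc is host:port,
# which is why hdfs://input/... made "input" the namenode host.
parts = urlparse(url)
print(parts.netloc, parts.path)  # localhost:54310 /input/war-and-peace.txt
```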
I'm also using CDH5. For me, the full path, i.e. "hdfs://nn1home:8020", is not working for some strange reason.
Most of the examples show the path like that.
I used a command like
val textFile=sc.textFile("hdfs:/input1/Card_History2016_3rdFloor.csv")
Output of the above command:
textFile: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[1] at textFile at <console>:22
textFile.count
res1: Long = 58973
and this works fine for me.

This works for me:
logFile = "hdfs://localhost:9000/sampledata/sample.txt"
If you started Spark with HADOOP_HOME set in spark-env.sh, Spark knows where to look for the HDFS configuration files.
In that case Spark already knows the location of your namenode/datanode, and the following alone should work to access HDFS files:
sc.textFile("/myhdfsdirectory/myfiletoprocess.txt")
You can create your myhdfsdirectory as below:
hdfs dfs -mkdir /myhdfsdirectory
and from your local file system you can move your myfiletoprocess.txt to the HDFS directory using the command below:
hdfs dfs -copyFromLocal mylocalfile /myhdfsdirectory/myfiletoprocess.txt
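The resolution that happens when you pass a bare /myhdfsdirectory/... path can be mimicked roughly like this (a simplified illustration of the idea, not Hadoop's actual Path resolution code):

```python
def resolve_against_default_fs(default_fs: str, path: str) -> str:
    """Prefix an absolute HDFS path with fs.defaultFS unless it is
    already a fully qualified URL."""
    if "://" in path:
        return path  # already fully qualified, leave untouched
    return default_fs.rstrip("/") + path

# A bare absolute path picks up the configured default filesystem:
print(resolve_against_default_fs("hdfs://nn1home:8020",
                                 "/myhdfsdirectory/myfiletoprocess.txt"))
# hdfs://nn1home:8020/myhdfsdirectory/myfiletoprocess.txt

# A full URL is passed through unchanged:
print(resolve_against_default_fs("hdfs://nn1home:8020",
                                 "hdfs://other:8020/x.txt"))
# hdfs://other:8020/x.txt
```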
val conf = new SparkConf().setMaster("local[*]").setAppName("HDFSFileReader")
conf.set("fs.defaultFS", "hdfs://hostname:9000")
val sc = new SparkContext(conf)
val data = sc.textFile("hdfs://hostname:9000/hdfspath/")
data.saveAsTextFile("C:\\dummy\\")
The above code reads all HDFS files from the directory and saves them locally in the C:\dummy folder.
It might be an issue with the file path or URL, or with the HDFS port, as well.
Solution: First open the core-site.xml
file from the location $HADOOP_HOME/etc/hadoop
and check the value of the property fs.defaultFS
Let's say the value is hdfs://localhost:9000
and the file location in HDFS is /home/usr/abc/fileName.txt
Then the file URL will be hdfs://localhost:9000/home/usr/abc/fileName.txt
and the following command is used to read the file from HDFS:
var result= scontext.textFile("hdfs://localhost:9000/home/usr/abc/fileName.txt", 2)
Get the fs.defaultFS URL from core-site.xml (/etc/hadoop/conf) and read the file as below. In my case, fs.defaultFS is hdfs://quickstart.cloudera:8020
txtfile = sc.textFile('hdfs://quickstart.cloudera:8020/user/cloudera/rddoutput')
txtfile.collect()