Reading files from an HDFS directory and creating an RDD in Spark using Python
I have some text files that I want to use to create an RDD. The text files are stored in "Folder_1" and "Folder_2", which in turn sit inside a "text_data" folder.

The following code works when the files are stored in local storage:
```python
import os

# Reading the corpus as an RDD of (folder, file_contents) pairs
data_folder = '/home/user/text_data'

def read_data(data_folder):
    data = sc.parallelize([])
    for folder in os.listdir(data_folder):
        for txt_file in os.listdir(data_folder + '/' + folder):
            temp = open(data_folder + '/' + folder + '/' + txt_file)
            temp_da = temp.read()
            temp_da = unicode(temp_da, errors='ignore')  # Python 2
            temp.close()
            a = [(folder, temp_da)]
            data = data.union(sc.parallelize(a))
    return data
```
The function read_data returns an RDD built from the text files.

How can I make the function above work if the "text_data" folder is moved to an HDFS directory? The code will be deployed on a Hadoop-YARN cluster running Spark.
Replace <namenode> below with the namenode of your Hadoop environment:

```python
hdfs_folder = 'hdfs://<namenode>/home/user/text_data/*'

def read_data(hdfs_folder):
    # textFile reads every file matched by the glob into one RDD of lines
    data = sc.textFile(hdfs_folder)
    return data
```

This was tested on Spark 1.6.2:
```
>>> hdfs_folder = 'hdfs://coord-1/tmp/sparktest/0.txt'
>>> def read_data(hdfs_folder):
...     data = sc.parallelize([])
...     data = sc.textFile(hdfs_folder)
...     return data
...
>>> read_data(hdfs_folder).count()
17/03/15 00:30:57 INFO SparkContext: Created broadcast 14 from textFile at NativeMethodAccessorImpl.java:-2
17/03/15 00:30:57 INFO SparkContext: Starting job: count at <stdin>:1
17/03/15 00:30:57 INFO SparkContext: Created broadcast 15 from broadcast at DAGScheduler.scala:1012
189
>>>
```
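Note that `sc.textFile` returns an RDD of individual lines and drops the (folder, contents) pairing that the original local `read_data` produced. If that pairing matters, `sc.wholeTextFiles` yields one `(path, content)` pair per file instead. A minimal sketch, assuming the parent subfolder name should serve as the label; `read_labeled_data` and `folder_label` are illustrative names, not part of the original post:

```python
import os


def folder_label(path):
    # Extract the immediate parent folder name from a full file path,
    # e.g. 'hdfs://nn/home/user/text_data/Folder_1/a.txt' -> 'Folder_1'
    return os.path.basename(os.path.dirname(path))


def read_labeled_data(sc, path_glob):
    # wholeTextFiles yields (full_path, file_contents) pairs, one per file;
    # map each path down to its folder name to mimic the original output.
    pairs = sc.wholeTextFiles(path_glob)
    return pairs.map(lambda kv: (folder_label(kv[0]), kv[1]))
```

Usage would mirror the answer above, e.g. `read_labeled_data(sc, 'hdfs://<namenode>/home/user/text_data/*')`, assuming the glob expands to the subfolders as it does for `textFile`.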