How to open a parquet file in HDFS with Python?
I am looking to read a parquet file that is stored in HDFS, and I am using Python to do this. I have the code below, but it does not open the file in HDFS. Can you help me change the code to do this?
from pyspark.sql import SQLContext

sc = spark.sparkContext
sqlContext = SQLContext(sc)
df = sqlContext.read.parquet('path-to-file/commentClusters.parquet')
I would also like to save the DataFrame as a CSV file.
Have a try with
sqlContext.read.parquet("hdfs://<host:port>/path-to-file/commentClusters.parquet")
To find out the host and port, search for the file core-site.xml (e.g. $HADOOP_HOME/etc/hadoop/core-site.xml) and look for the XML element fs.defaultFS.
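If you'd rather look up fs.defaultFS programmatically than open the file by hand, here is a minimal sketch in plain Python using the standard library XML parser (the function name and sample values are illustrations, not part of any Hadoop API):

```python
import xml.etree.ElementTree as ET

def read_default_fs(core_site_path):
    """Return the fs.defaultFS value (e.g. 'hdfs://namenode:9000')
    from a core-site.xml file, or None if the property is absent."""
    root = ET.parse(core_site_path).getroot()
    for prop in root.iter("property"):
        if prop.findtext("name") == "fs.defaultFS":
            return prop.findtext("value")
    return None
```

For example, calling it on `$HADOOP_HOME/etc/hadoop/core-site.xml` returns the `hdfs://host:port` authority to prepend to your file path.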
To keep it simple, try
sqlContext.read.parquet("hdfs:////path-to-file/commentClusters.parquet")
or
sqlContext.read.parquet("hdfs:/path-to-file/commentClusters.parquet")
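The variants above differ only in whether the host:port authority is spelled out. If you already have the fs.defaultFS value, a small helper (hypothetical, just string assembly) can build the fully qualified URI to pass to sqlContext.read.parquet:

```python
def hdfs_uri(default_fs, path):
    """Join an fs.defaultFS authority such as 'hdfs://namenode:9000'
    with an absolute HDFS path into one fully qualified URI."""
    return default_fs.rstrip("/") + "/" + path.lstrip("/")
```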
See also: Cannot Read a file from HDFS using Spark
To save as CSV, try
df_result.write.csv(path=res_path) # possible options: header=True, compression='gzip'