
How to read parquet files from AWS S3 using spark dataframe in python (pyspark)

I am trying to read some parquet files stored in an S3 bucket. I am using the following code:

import boto3

s3 = boto3.resource('s3')

# get a handle on the bucket that holds your file
bucket = s3.Bucket('bucket_name')

# get a handle on the object you want (i.e. your file)
obj = bucket.Object(key = 'file/key/083b661babc54dd89139449d15fa22dd.snappy.parquet')

# get the object
response = obj.get()

# read the contents of the file and split it into a list of lines
lines = response[u'Body'].read().split('\n')

當試圖執行最后一行代碼lines = response[u'Body'].read().split('\n')我收到以下錯誤:

TypeError: a bytes-like object is required, not 'str'

I am not sure how to fix this problem.
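As an aside, the TypeError itself is a Python 3 str/bytes mismatch: response['Body'].read() returns bytes, so splitting requires a bytes separator, not a str. A minimal sketch of that one-line change (it only silences the error; Parquet is a binary columnar format, so splitting on newlines will not yield usable records, which is why the Spark-based answer below is the right approach):

# read() returns bytes in Python 3, so split on a bytes literal
lines = response['Body'].read().split(b'\n')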

Instead of boto3, I had to use the following code:

import os

import pyspark
from pyspark.sql import SQLContext

myAccessKey = 'your key'
mySecretKey = 'your key'

# Pull in the AWS SDK and hadoop-aws connector; this must be set
# before the SparkContext (and its JVM) is created
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk:1.10.34,org.apache.hadoop:hadoop-aws:2.6.0 pyspark-shell'

sc = pyspark.SparkContext("local[*]")
sqlContext = SQLContext(sc)

# Point the s3:// scheme at the native S3 filesystem and supply credentials
hadoopConf = sc._jsc.hadoopConfiguration()
hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
hadoopConf.set("fs.s3.awsAccessKeyId", myAccessKey)
hadoopConf.set("fs.s3.awsSecretAccessKey", mySecretKey)

df = sqlContext.read.parquet("s3://bucket-name/path/")
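On Spark 2.x and later, the same read is usually done through a SparkSession and the maintained s3a connector rather than SQLContext and the legacy s3 scheme. A minimal sketch, assuming a matching hadoop-aws package is on the classpath and reusing the credentials from above; the app name and bucket path are placeholders:

from pyspark.sql import SparkSession

# Credentials can be passed as Hadoop configuration via the spark.hadoop. prefix
spark = (SparkSession.builder
         .appName("read-parquet-from-s3")
         .config("spark.hadoop.fs.s3a.access.key", myAccessKey)
         .config("spark.hadoop.fs.s3a.secret.key", mySecretKey)
         .getOrCreate())

# s3a:// is the actively maintained S3 connector in Hadoop
df = spark.read.parquet("s3a://bucket-name/path/")
df.printSchema()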

