I'm trying to read an ORC file from S3 into a pandas DataFrame. In my version of pandas there is no pd.read_orc(...).
I tried to do this:
session = boto3.Session()
s3_client = session.client('s3')
s3_key = "my_object_key"
data = s3_client.get_object(
    Bucket='my_bucket',
    Key=s3_key
)
orc_bytes = data['Body'].read()
Which reads the object as bytes.
Now I try to do this:
orc_data = pyorc.Reader(orc_bytes)
But it fails because:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-deaabe8232ce> in <module>
----> 1 data = pyorc.Reader(orc_data)
/anaconda3/envs/linear_opt_3.7/lib/python3.7/site-packages/pyorc/reader.py in __init__(self, fileo, batch_size, column_indices, column_names, struct_repr, converters)
65 conv = converters
66 super().__init__(
---> 67 fileo, batch_size, column_indices, column_names, struct_repr, conv
68 )
69
TypeError: Parameter must be a file-like object, but `<class 'bytes'>` was provided
Eventually I would like to land it as .csv or something else I can read into pandas. Is there a better way to do this?
Try wrapping the S3 data in an io.BytesIO:
import io
orc_bytes = io.BytesIO(data['Body'].read())
orc_data = pyorc.Reader(orc_bytes)
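This works because pyorc.Reader only needs a file-like object with read/seek methods, which io.BytesIO provides over an in-memory buffer — raw bytes has neither. A minimal sketch of the difference (the byte string here is just a placeholder):

```python
import io

raw = b"some object bytes"
buf = io.BytesIO(raw)

# BytesIO exposes the file methods pyorc expects; plain bytes does not
assert hasattr(buf, "read") and hasattr(buf, "seek")
assert not hasattr(raw, "read")
print(buf.read())  # b'some object bytes'
```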
Here's the function that solves the problem end to end:
import boto3
import pyorc
import io
import pandas as pd
session = boto3.Session()
s3_client = session.client('s3')
def load_s3_orc_to_local_df(key, bucket):
    # Fetch the object and wrap the raw bytes in a file-like buffer
    data = s3_client.get_object(Bucket=bucket, Key=key)
    orc_bytes = io.BytesIO(data['Body'].read())
    # pyorc.Reader requires a file-like object, not raw bytes
    reader = pyorc.Reader(orc_bytes)
    # Column names come from the ORC schema; iterating the reader yields row tuples
    columns = list(reader.schema.fields)
    rows = list(reader)
    return pd.DataFrame(data=rows, columns=columns)