I am working in a Jupyter notebook on AWS EMR.
I am able to do this: pd.read_csv("s3://mypath/xyz.csv")
However, if I try to open a pickle file like this, pd.read_pickle("s3://mypath/xyz.pkl")
I am getting this error:
[Errno 2] No such file or directory: 's3://pvarma1/users/users/candidate_users.pkl'
Traceback (most recent call last):
  File "/usr/local/lib64/python2.7/site-packages/pandas/io/pickle.py", line 179, in read_pickle
    return try_read(path)
  File "/usr/local/lib64/python2.7/site-packages/pandas/io/pickle.py", line 177, in try_read
    lambda f: pc.load(f, encoding=encoding, compat=True))
  File "/usr/local/lib64/python2.7/site-packages/pandas/io/pickle.py", line 146, in read_wrapper
    is_text=False)
  File "/usr/local/lib64/python2.7/site-packages/pandas/io/common.py", line 421, in _get_handle
    f = open(path_or_buf, mode)
IOError: [Errno 2] No such file or directory: 's3://pvarma1/users/users/candidate_users.pkl'
However, I can see both xyz.csv and xyz.pkl in the same path! Can anyone help?
In the pandas version on your cluster (note the Python 2.7 paths in the traceback), read_pickle supports only local paths, unlike read_csv, which can read directly from S3. So you should copy the pickle file to the local filesystem before reading it with pandas.
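A minimal sketch of that workaround. The bucket and key names here are hypothetical, and the boto3 download line (commented out) is what you would actually run on EMR; the local round-trip below just stands in for the downloaded file:

```python
import os
import tempfile

import pandas as pd
# import boto3  # available on EMR; uncomment to download for real

# On EMR you would first copy the object to a local path, e.g.:
# boto3.client("s3").download_file("mypath", "xyz.pkl", "/tmp/xyz.pkl")

# Simulate the downloaded file with a local round-trip:
local_path = os.path.join(tempfile.gettempdir(), "xyz.pkl")
df = pd.DataFrame({"user": ["a", "b"], "score": [1, 2]})
df.to_pickle(local_path)

# Once the file is on the local filesystem, read_pickle works:
df2 = pd.read_pickle(local_path)
print(df2.equals(df))  # True
```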
Since read_pickle does not support this, you can use smart_open:

import pandas as pd
from smart_open import open

s3_file_name = "s3://bucket/key"
with open(s3_file_name, 'rb') as f:
    df = pd.read_pickle(f)
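This works because, in reasonably recent pandas versions, read_pickle accepts an open binary file-like object and not just a path string; smart_open simply hands it an S3-backed file handle. You can verify the file-object code path locally without S3, using an in-memory buffer in place of the network stream:

```python
import io

import pandas as pd

# read_pickle accepts any binary file-like object, which is why the
# smart_open handle works; demonstrate with an in-memory buffer.
df = pd.DataFrame({"x": [1, 2, 3]})
buf = io.BytesIO()
df.to_pickle(buf)   # write the pickle bytes into the buffer
buf.seek(0)         # rewind so read_pickle starts at the beginning

df2 = pd.read_pickle(buf)
print(df2.equals(df))  # True
```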