
Python - unable to read a large file

How do I read a large table from HDFS in a Jupyter notebook as a pandas DataFrame? The script is launched from a Docker image.

libraries:

  • sasl==0.2.1
  • thrift==0.11.0
  • thrift-sasl==0.4a1
  • impyla==0.16.2

from impala.dbapi import connect
from impala.util import as_pandas

impala_conn = connect(host='hostname', port=21050,
                      auth_mechanism='GSSAPI',
                      timeout=100000, use_ssl=True, ca_cert=None,
                      ldap_user=None, ldap_password=None,
                      kerberos_service_name='impala')

This works:


import pandas as pd
df = pd.read_sql("select id, crt_mnemo from demo_db.stg_deals_opn LIMIT 100", impala_conn)
print(df)

This does not work; the operation hangs and gives no errors:


import pandas as pd
df = pd.read_sql("select id, crt_mnemo from demo_db.stg_deals_opn LIMIT 1000", impala_conn)
print(df)

This seems to be a problem with the number of rows you can pull from Impala through pandas' read_sql function. I have the same problem, but at lower limits than yours. You may want to contact the database admin to check the configured size limits. Here are other options: https://docs.cloudera.com/machine-learning/cloud/import-data/topics/ml-running-queries-on-impala-tables.html
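One thing worth trying is pulling the result in smaller batches instead of a single large fetch. The sketch below is not from the original answer; it reuses the impala_conn and table name from the question, and the batch size of 100 is an arbitrary choice. pandas' chunksize parameter and the DB-API fetchmany call are standard, but whether batching actually avoids the hang depends on what is blocking on your cluster.

import pandas as pd

query = "select id, crt_mnemo from demo_db.stg_deals_opn LIMIT 1000"

# Option 1: let pandas stream the result in chunks of 100 rows.
# With chunksize set, read_sql returns an iterator of DataFrames.
chunks = list(pd.read_sql(query, impala_conn, chunksize=100))
df = pd.concat(chunks, ignore_index=True)
print(len(df))

# Option 2: fetch batches manually through the DB-API cursor.
cursor = impala_conn.cursor()
cursor.execute(query)
columns = [col[0] for col in cursor.description]
rows = []
while True:
    batch = cursor.fetchmany(100)  # returns an empty list once the result set is exhausted
    if not batch:
        break
    rows.extend(batch)
df = pd.DataFrame(rows, columns=columns)
print(len(df))

If the hang persists even with small batches, the problem is more likely on the connection or cluster side than in pandas itself.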
