
Pandas read_sql with chunksize gives argument error with MySQL data

I'm trying to read a large dataset (13 million rows) from a MySQL database into pandas (0.17.1). Following one of the suggestions online, I used the chunksize parameter to do this.

import pymysql
import pandas as pd

db = pymysql.connect(HOST,           # localhost
                     port=PORT,      # port
                     user=USER,      # username
                     password=PASSW, # password
                     db=DATABASE)    # name of the database

df = pd.DataFrame()
query = "SELECT * FROM `table`;"
for chunks in pd.read_sql(query, con=db, chunksize=100000):
    df = df.append(chunks)

But every time I run this I get a TypeError: Argument 'rows' has incorrect type (expected list, got tuple) error.

This was working when I didn't use the chunksize parameter, and hence no generator object was produced. And I can see that MySQL is returning a tuple-of-tuples instead of a list-of-tuples.

So, my question is: why does the query work in the normal case, and what do I do to make sure I get a list-of-tuples from the database so that I can work with it?
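The error can in fact be reproduced without the database at all. A minimal sketch with made-up rows (the column names here are just placeholders), showing that on the pandas versions mentioned here it is the outer tuple, not the row tuples, that trips the check:

import pandas as pd

# PyMySQL's cursors return a tuple of row tuples:
rows = ((1, 'Alice'), (2, 'Bob'))

# Passing the raw tuple to from_records triggers the same TypeError on
# pandas 0.16.x/0.17.x; converting the outer container to a list avoids it:
df = pd.DataFrame.from_records(list(rows), columns=['id', 'name'])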

The full traceback looks like this:

TypeError                                 Traceback (most recent call last)
<ipython-input-20-efe94dcd2c70> in <module>()
      8 df_horses = pd.DataFrame()
      9 query = "SELECT * FROM `horses`;"
---> 10 for chunks in pd.read_sql(query, con=db, chunksize=10000):
     11     df_horses = df_horses.append(chunks)
     12 print df_horses.shape

/home/ubuntu/anaconda2/lib/python2.7/site-packages/pandas/io/sql.pyc in _query_iterator(cursor, chunksize, columns, index_col, coerce_float, parse_dates)
   1563                 yield _wrap_result(data, columns, index_col=index_col,
   1564                                    coerce_float=coerce_float,
-> 1565                                    parse_dates=parse_dates)
   1566 
   1567     def read_query(self, sql, index_col=None, coerce_float=True, params=None,

/home/ubuntu/anaconda2/lib/python2.7/site-packages/pandas/io/sql.pyc in _wrap_result(data, columns, index_col, coerce_float, parse_dates)
    135 
    136     frame = DataFrame.from_records(data, columns=columns,
--> 137                                    coerce_float=coerce_float)
    138 
    139     _parse_date_columns(frame, parse_dates)

/home/ubuntu/anaconda2/lib/python2.7/site-packages/pandas/core/frame.pyc in from_records(cls, data, index, exclude, columns, coerce_float, nrows)
    967         else:
    968             arrays, arr_columns = _to_arrays(data, columns,
--> 969                                              coerce_float=coerce_float)
    970 
    971             arr_columns = _ensure_index(arr_columns)

/home/ubuntu/anaconda2/lib/python2.7/site-packages/pandas/core/frame.pyc in _to_arrays(data, columns, coerce_float, dtype)
   5277     if isinstance(data[0], (list, tuple)):
   5278         return _list_to_arrays(data, columns, coerce_float=coerce_float,
-> 5279                                dtype=dtype)
   5280     elif isinstance(data[0], collections.Mapping):
   5281         return _list_of_dict_to_arrays(data, columns,

/home/ubuntu/anaconda2/lib/python2.7/site-packages/pandas/core/frame.pyc in _list_to_arrays(data, columns, coerce_float, dtype)
   5355 def _list_to_arrays(data, columns, coerce_float=False, dtype=None):
   5356     if len(data) > 0 and isinstance(data[0], tuple):
-> 5357         content = list(lib.to_object_array_tuples(data).T)
   5358     else:
   5359         # list of lists

TypeError: Argument 'rows' has incorrect type (expected list, got tuple)

I'm not aware of the reason why pd.read_sql does not return a list of tuples when chunksize is used. In fact, pd.read_sql does not throw any error with pandas version 0.23.4. But I also tried pandas version 0.16.2, where I encountered the same error as yours. So please do check your pandas version before scripting. I do, however, know a way to work around this error in pandas version 0.16.2.

pandas version 0.16.2

import pymysql as ps
import pandas as pd

db = ps.connect(user="user_name", passwd="password", host='host_name',
                db='database_name')
cursor = db.cursor()
df = pd.DataFrame(columns=['column_name1', 'column_name2'])
query = """ select column_name1, column_name2 from table_name limit {0},{1}; """
limit = 1000000
offset = 0
try:
    while True:
        cursor.execute(query.format(offset, limit))
        # fetchall() returns a tuple of tuples; list() converts it into
        # the list-of-tuples that this pandas version expects
        rows = pd.DataFrame(list(cursor.fetchall()),
                            columns=['column_name1', 'column_name2'])
        df = pd.concat([df, rows], ignore_index=True)
        offset = offset + limit
        if len(rows['column_name1']) == 0:
            break
except Exception:
    # note: this silently swallows any database error
    pass
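As an alternative to LIMIT/OFFSET pagination, which gets progressively slower as the offset grows on a table this size, the chunking can also be done with the DBAPI cursor's fetchmany() and a list() conversion on each batch. A minimal sketch using the same placeholder credentials and column names as above; the unbuffered SSCursor keeps PyMySQL from loading the whole result set into memory at execute() time:

import pymysql as ps
import pandas as pd

db = ps.connect(user="user_name", passwd="password", host='host_name',
                db='database_name')
# SSCursor streams rows from the server instead of buffering them all
cursor = db.cursor(ps.cursors.SSCursor)
cursor.execute("select column_name1, column_name2 from table_name;")

chunks = []
while True:
    # fetchmany() returns at most 100000 row tuples; list() converts the
    # batch into the list-of-tuples pandas expects
    batch = cursor.fetchmany(100000)
    if not batch:
        break
    chunks.append(pd.DataFrame(list(batch),
                               columns=['column_name1', 'column_name2']))

df = pd.concat(chunks, ignore_index=True)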

Made changes to your existing code: append the chunks to a list, then concat the list into a pandas DataFrame.

df_lst = []
query = "SELECT * FROM `table`;"
for chunk in pd.read_sql_query(query, con=db, chunksize=100000):
    df_lst.append(chunk)

df = pd.concat(df_lst, ignore_index=True)
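Collecting chunks in a list and concatenating once at the end is also much faster than calling df.append inside the loop, since append copies the accumulated frame on every iteration. Beyond that, pandas only fully supports MySQL through a SQLAlchemy connectable; a raw DBAPI connection other than sqlite3 goes through the fallback code path that appears in the traceback above. A sketch of the engine-based approach; the credentials, host, and table name in the URL are placeholders:

import pandas as pd
from sqlalchemy import create_engine

# mysql+pymysql driver URL; user, password, host and database are placeholders
engine = create_engine("mysql+pymysql://user:password@host:3306/database")

query = "SELECT * FROM `table`;"
chunks = [chunk for chunk in
          pd.read_sql_query(query, con=engine, chunksize=100000)]
df = pd.concat(chunks, ignore_index=True)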
