
Fast loading into SQL Server with Python

I have a DataFrame with six columns and around 27,000 rows.
I'm trying to load this DataFrame into my SQL Server (not localhost), but it takes forever.

Does anyone know of a faster way to load it than the code below?
27,000 rows shouldn't take long. Reading from the database is no problem. :-)

for index, row in predict.iterrows():
    params = [(row.account_no, row.group_company, row.customer_type, row.invoice_date, row.lower, row.upper)]
    cursor.fast_executemany = True
    cursor.executemany("INSERT INTO ML.predictions (account_no, group_company, customer_type, invoice_date, lower, upper) values (?,?,?,?,?,?)",
                       params)
    bachelor.commit()
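
For comparison, pyodbc's fast_executemany is normally enabled once on the cursor and then given the whole parameter list in a single executemany call, followed by one commit, rather than one statement and one commit per row. A minimal sketch, assuming a pyodbc connection (the connection string below is a placeholder) and the same predict DataFrame with those six columns:

import pyodbc

# Placeholder connection string - adjust driver, server, database and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)
cursor = conn.cursor()

# Enable fast_executemany once, outside any loop.
cursor.fast_executemany = True

# One parameter tuple per DataFrame row, in the column order of the INSERT.
params = list(
    predict[["account_no", "group_company", "customer_type",
             "invoice_date", "lower", "upper"]].itertuples(index=False, name=None)
)

# Single executemany call for all rows, then a single commit.
cursor.executemany(
    "INSERT INTO ML.predictions "
    "(account_no, group_company, customer_type, invoice_date, lower, upper) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    params,
)
conn.commit()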

ANSWER

Thanks for your answers - I tried them all except the JSON approach; it might just be me doing something wrong with that one.

This was my solution

# Render each DataFrame row as a literal SQL value list, e.g. "(..., ...)"
records = [str(tuple(x)) for x in predict.values]

insert_ = """
INSERT INTO ml.predictions(account_no, group_company, customer_type, invoice_date, lower, upper) VALUES
"""

# Yield successive slices of seq with at most `size` elements each
def chunker(seq, size):
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

# Build and execute one multi-row INSERT per batch of 1000 rows
for batch in chunker(records, 1000):
    rows = ','.join(batch)
    insert_rows = insert_ + rows
    cursor.execute(insert_rows)
    bachelor.commit()
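
The 1000-row batch size matches SQL Server's limit of 1000 rows per INSERT ... VALUES table value constructor, so larger batches would be rejected. One caveat with this approach: str(tuple(x)) embeds the values as Python literals, so strings containing quotes, or types such as dates, may not render as valid T-SQL. Below is a parameterized variant of the same chunking idea; it is only a sketch, not part of the original answer, and it assumes predict's column order matches the INSERT column list. The batch drops to 300 rows because SQL Server/ODBC allows at most about 2100 parameters per statement (300 rows × 6 columns = 1800).

# Sketch only (not the original answer): let the driver handle quoting by
# passing the values as parameters instead of string literals.
value_rows = list(predict.itertuples(index=False, name=None))

for batch in chunker(value_rows, 300):  # 300 rows * 6 columns = 1800 parameters, under the ~2100 cap
    placeholders = ", ".join(["(?, ?, ?, ?, ?, ?)"] * len(batch))
    flat_params = [value for row in batch for value in row]
    cursor.execute(insert_ + placeholders, flat_params)

bachelor.commit()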
