
Writing a CSV file to Redshift using Python

When I try to execute the script below I receive a "too many arguments" error. My CSV file has around 28 columns and 30 rows. All connections are working fine and the file is read correctly; from the log, it looks as if I can't write more than 2 or 3 columns to the table in Redshift.

import psycopg2
import csv
import time
import datetime
import pandas as pd
import sys
reload(sys)
sys.setdefaultencoding('utf8')
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)
print('script starts')
connect = psycopg2.connect(dbname='db', host='***********', port=5439, user='****', password='********')
cur = connect.cursor()
print('begin execute')
up_pgm_list = pd.read_csv('/prod/user/home/dqe933/FNI.csv')
with open('/prod/user/home/dqe933/FNI.csv', 'Ur') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=',')
    for row in spamreader:
        cur.execute("""INSERT INTO UD_INTERIM.dqe933_fni_new(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,
            col11,col12,col13,col14,col15,col16,col17,col18,col19,col20,col21,col22,col23,col24,col25,col26,col27,col28)
            VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)""", *row)

print('complete')

row shouldn't be unpacked: pass it to cur.execute() as a single iterable of 28 items. (Writing *row spreads the 28 values out as separate positional arguments to execute(), which is what triggers the "too many arguments" error.) Note also that psycopg2 uses %s as its parameter placeholder, not ?:

cur.execute("""INSERT INTO UD_INTERIM.dqe933_fni_new(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15,col16,col17,col18,col19,col20,col21,col22,col23,col24,col25,col26,col27,col28)
    VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)""", row)
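For what it's worth, here is a minimal sketch of one way to write the whole loop, reusing the connect/cur objects, table name, and file path from your question. Building the placeholder list programmatically avoids miscounting the 28 %s markers, and executemany plus an explicit commit are my assumptions about how you want to load the data, not something your original script did:

import csv

# Sketch only: assumes `connect` and `cur` already exist as in the question.
placeholders = ','.join(['%s'] * 28)                       # psycopg2 placeholders, one per column
columns = ','.join('col{}'.format(i) for i in range(1, 29))
sql = 'INSERT INTO UD_INTERIM.dqe933_fni_new ({}) VALUES ({})'.format(columns, placeholders)

with open('/prod/user/home/dqe933/FNI.csv') as csvfile:
    cur.executemany(sql, csv.reader(csvfile))              # each CSV row becomes one parameter sequence
connect.commit()                                           # psycopg2 does not autocommit by default

Also, for anything much larger than 30 rows, Redshift's COPY command (loading the file from S3) will generally be far faster than row-by-row INSERTs.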
