
Writing a CSV file to Redshift using Python

When I try to execute the script below, I receive a "too many arguments" error. My CSV file has around 28 columns and 30 rows. All connections work fine and the file is read correctly with no other issues; from the log I understand that I can't write more than 2 or 3 columns to the table in Redshift.

import psycopg2
import csv
import time
import datetime
import pandas as pd
import sys
reload (sys)
sys.setdefaultencoding('utf8')
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)
print('script sterts')
connect=psycopg2.connect(dbname='db',host='***********',port=5439,user='****',password='********')
cur=connect.cursor()
print('begin execute')
up_pgm_list = pd.read_csv ('/prod/user/home/dqe933/FNI.csv')
with open('/prod/user/home/dqe933/FNI.csv', 'Ur') as csvfile:
         spamreader=csv.reader(csvfile,delimiter=',') 
         for row in spamreader:
                 cur.execute("""INSERT INTO UD_INTERIM.dqe933_fni_new(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,
          col12,col13,col14,col15,col16,col17,col18,col19,col20,col21,col22,col23,col24,col25,col26,col27,col28)values(?,?,?,
          ?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)""",*row)

print('complete')

`row` shouldn't be unpacked: `cur.execute(query, *row)` spreads the 28 fields into 28 separate positional arguments, but `execute()` accepts only the query and a single parameter sequence — hence the "too many arguments" error. Make sure `row` is one iterable of 28 items and pass it as the second argument.

Note also that psycopg2 uses the `%s` placeholder style, not the `qmark` (`?`) style:

cur.execute("""INSERT INTO UD_INTERIM.dqe933_fni_new(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15,col16,col17,col18,col19,col20,col21,col22,col23,col24,col25,col26,col27,col28) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)""", row)
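The corrected pattern can be sketched like this. The table and column names are taken from the question; the in-memory CSV is a stand-in for the real file, since no live Redshift connection is available here. Building the statement programmatically guarantees the placeholder count always matches the column count:

```python
import csv
import io

# psycopg2 uses %s as its placeholder, not the DB-API "qmark" style (?).
# Generate the column list and placeholders so their counts always agree.
NUM_COLS = 28
cols = ",".join(f"col{i}" for i in range(1, NUM_COLS + 1))
placeholders = ",".join(["%s"] * NUM_COLS)
insert_sql = (
    f"INSERT INTO UD_INTERIM.dqe933_fni_new({cols}) VALUES ({placeholders})"
)

# A small in-memory CSV stands in for /prod/user/home/dqe933/FNI.csv.
sample = io.StringIO("a1,a2\nb1,b2\n")
rows = list(csv.reader(sample))

# With a live connection the loop would be:
#
#     for row in csv.reader(csvfile):
#         cur.execute(insert_sql, row)   # pass row itself, not *row
#     connect.commit()                   # psycopg2 does not autocommit
```

Each `row` from `csv.reader` is already a list, so it can be passed directly as the single parameter sequence that `execute()` expects.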
