Python: Export Large SQL Server Query Result to .txt File
What is the most performance- and memory-efficient way to copy a SQL Server query result of more than 600,000,000 rows to a local .txt file? You may assume that I do not have user permissions to export from SQL Server Management Studio. For this reason, Python seems to be my best option.
I am currently using the Python pyodbc package:
import csv
import time

import pyodbc

connection = pyodbc.connect('Driver=DRIVER;'
                            'Server=SERVER;'
                            'Database=DATABASE;'
                            'uid=USERNAME;'
                            'pwd=PASSWORD')
cursor = connection.cursor()

try:
    # table is an identifier, so it cannot be passed as a bound parameter
    cursor.execute("SELECT * FROM %s" % table)
except pyodbc.Error:
    print('===== WAITING ===== EXECUTE ERROR =====')
    time.sleep(15)
    cursor.execute("SELECT * FROM %s" % table)

try:
    data = cursor.fetchall()
except pyodbc.Error:
    print('===== WAITING ===== FETCH ERROR =====')
    time.sleep(15)
    data = cursor.fetchall()

with open(output_file, 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f, delimiter=delimiter)
    writer.writerow([x[0] for x in cursor.description])  # column headers
    for row in data:
        writer.writerow(row)

cursor.close()
Side note: my goal is to transfer several hundred SQL tables as .txt files to an Amazon S3 bucket. Is there a better way to do that instead of downloading the file to a local drive and then uploading to S3?
It depends on the result set, but as a general rule, I'd use fetchmany to grab a bunch of rows at a time instead of pulling everything into memory:
fetch_rows = 1000
rows = cursor.fetchmany(fetch_rows)
while rows:  # fetchmany returns an empty list, not None, when exhausted
    for row in rows:
        do_something(row)
    rows = cursor.fetchmany(fetch_rows)
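Applied to the export in the question, the same batched fetch can feed the csv writer directly, so only one chunk of rows is ever held in memory. A minimal sketch (the `export_query_to_file` helper is a name I made up; the cursor, output file, and delimiter are assumed to be set up as in your code):

```python
import csv

def export_query_to_file(cursor, output_file, delimiter='|', fetch_rows=1000):
    """Write the result set to disk in batches, holding at most
    fetch_rows rows in memory at any time."""
    with open(output_file, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f, delimiter=delimiter)
        writer.writerow([col[0] for col in cursor.description])  # header row
        rows = cursor.fetchmany(fetch_rows)
        while rows:  # fetchmany returns [] when the result set is exhausted
            writer.writerows(rows)
            rows = cursor.fetchmany(fetch_rows)
```

With fetch_rows = 1000 you hold at most 1,000 rows in memory regardless of the size of the result set; tune it upward if the per-batch round trips to the server dominate.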
Good luck!
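On the side note: one way to skip the local file entirely is to stream the rows into an S3 multipart upload with boto3 (assumed installed and configured with credentials; `stream_query_to_s3`, `bucket`, and `key` are hypothetical names). S3 requires every part except the last to be at least 5 MB, which is why the buffer is only flushed once it passes that threshold. A sketch, not a hardened implementation:

```python
import csv
import io

def stream_query_to_s3(cursor, bucket, key, s3=None, delimiter='|',
                       part_size=5 * 1024 * 1024, fetch_rows=1000):
    """Stream cursor rows straight into an S3 multipart upload,
    so the full result set never touches the local disk."""
    if s3 is None:
        import boto3  # assumed installed: pip install boto3
        s3 = boto3.client('s3')
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts = []
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=delimiter)

    def flush():
        nonlocal buf, writer
        data = buf.getvalue().encode('utf-8')
        if not data:
            return
        resp = s3.upload_part(Bucket=bucket, Key=key,
                              UploadId=mpu['UploadId'],
                              PartNumber=len(parts) + 1, Body=data)
        parts.append({'PartNumber': len(parts) + 1, 'ETag': resp['ETag']})
        buf = io.StringIO()
        writer = csv.writer(buf, delimiter=delimiter)

    writer.writerow([col[0] for col in cursor.description])  # header row
    rows = cursor.fetchmany(fetch_rows)
    while rows:  # fetchmany returns [] when the result set is exhausted
        writer.writerows(rows)
        if buf.tell() >= part_size:  # parts must be >= 5 MB, except the last
            flush()
        rows = cursor.fetchmany(fetch_rows)
    flush()  # upload whatever remains as the final (possibly small) part
    s3.complete_multipart_upload(Bucket=bucket, Key=key,
                                 UploadId=mpu['UploadId'],
                                 MultipartUpload={'Parts': parts})
```

For several hundred tables you would call this once per table, which also makes it easy to parallelize across tables without competing for local disk space.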