How to fix memory error while importing a very large csv file to mongodb in python?
Below is the code for importing a pipe-delimited csv file into MongoDB.
import csv
import json
from pymongo import MongoClient

url = "mongodb://localhost:27017"
client = MongoClient(url)
db = client.Office
customer = db.Customer

jsonArray = []

with open("Names.txt", "r") as csv_file:
    csv_reader = csv.DictReader(csv_file, dialect='excel', delimiter='|', quoting=csv.QUOTE_NONE)
    for row in csv_reader:
        jsonArray.append(row)

jsonString = json.dumps(jsonArray, indent=1, separators=(",", ":"))
jsonfile = json.loads(jsonString)
customer.insert_many(jsonfile)
Below is the error I get when running the code above.
Traceback (most recent call last):
File "E:\Anaconda Projects\Mongo Projects\Office Tool\csvtojson.py", line 16, in <module>
jsonString = json.dumps(jsonArray, indent=1, separators=(",", ":"))
File "C:\Users\Predator\anaconda3\lib\json\__init__.py", line 234, in dumps
return cls(
File "C:\Users\Predator\anaconda3\lib\json\encoder.py", line 201, in encode
chunks = list(chunks)
MemoryError
If I change the indentation so that the code runs inside the for loop, MongoDB keeps re-importing the same data without stopping.
import csv
import json
from pymongo import MongoClient

url = "mongodb://localhost:27017"
client = MongoClient(url)
db = client.Office
customer = db.Customer

jsonArray = []

with open("Names.txt", "r") as csv_file:
    csv_reader = csv.DictReader(csv_file, dialect='excel', delimiter='|', quoting=csv.QUOTE_NONE)
    for row in csv_reader:
        jsonArray.append(row)
        jsonString = json.dumps(jsonArray, indent=1, separators=(",", ":"))
        jsonfile = json.loads(jsonString)
        customer.insert_many(jsonfile)
I would suggest you use pandas; it provides a "chunked" mode by setting the chunksize parameter, which you can tune to your memory limit. insert_many() is also more efficient.
Plus, the code becomes much simpler:
import pandas as pd

filename = "Names.txt"

with pd.read_csv(filename, chunksize=1000, delimiter='|') as reader:
    for chunk in reader:
        db.mycollection.insert_many(chunk.to_dict('records'))
If you post a sample of the file, I can update this to match.
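As a side note on the snippet above: to_dict('records') turns each DataFrame chunk into a list of plain dicts, which is exactly the shape insert_many() expects. A minimal self-contained check (the sample data here is made up, and the database call is omitted):

```python
import io
import pandas as pd

# stand-in for the pipe-delimited Names.txt file
csv_text = "name|city\nAnn|Oslo\nBob|Rome\nCid|Lima\n"

rows = []
# read in chunks of 2 rows, as the answer does with chunksize=1000
with pd.read_csv(io.StringIO(csv_text), chunksize=2, delimiter='|') as reader:
    for chunk in reader:
        # to_dict('records') yields one dict per row
        rows.extend(chunk.to_dict('records'))

# rows is now a list of dicts ready for insert_many()
```

Each chunk only holds chunksize rows in memory at once, which is why this avoids the MemoryError.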
The memory problem can be solved by inserting one record at a time.
import csv
import json
from pymongo import MongoClient

url_mongo = "mongodb://localhost:27017"
client = MongoClient(url_mongo)
db = client.Office
customer = db.Customer

jsonArray = []
file_txt = "Text.txt"
rowcount = 0

with open(file_txt, "r") as txt_file:
    csv_reader = csv.DictReader(txt_file, dialect="excel", delimiter="|", quoting=csv.QUOTE_NONE)
    for row in csv_reader:
        rowcount += 1
        jsonArray.append(row)

for i in range(rowcount):
    jsonString = json.dumps(jsonArray[i], indent=1, separators=(",", ":"))
    jsonfile = json.loads(jsonString)
    customer.insert_one(jsonfile)

print("Finished")
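A further note (not from the answers above): the two ideas can be combined without pandas. Streaming the file with csv.DictReader and inserting in fixed-size batches keeps memory bounded while keeping the bulk speed of insert_many(); DictReader rows are already plain dicts, so no json.dumps/loads round-trip is needed. A sketch, with the database call left as a comment since it assumes a running MongoDB:

```python
import csv
import io
from itertools import islice

def batches(rows, size):
    """Yield lists of at most `size` items from any iterator."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# demo on an in-memory pipe-delimited sample (stand-in for Names.txt)
sample = io.StringIO("name|city\nAnn|Oslo\nBob|Rome\nCid|Lima\n")
reader = csv.DictReader(sample, delimiter="|", quoting=csv.QUOTE_NONE)
chunks = list(batches(reader, 2))

# real usage, assuming a local MongoDB as in the question:
# from pymongo import MongoClient
# customer = MongoClient("mongodb://localhost:27017").Office.Customer
# with open("Names.txt", "r") as f:
#     rdr = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE)
#     for batch in batches(rdr, 1000):
#         customer.insert_many(batch)  # rows are plain dicts already
```

Only one batch of rows is held in memory at a time, so the file size no longer matters.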
Thanks everyone for the ideas.