write table cell real-time python
I would like to loop through a database, find the appropriate values, and insert them into the appropriate cell in a separate file. It could be a CSV, or any other human-readable format. In pseudo-code:
    for item in huge_db:
        for object_to_match in list_of_objects_to_match:
            if itemmatch():
                if there_arent_three_matches_yet_in_list():
                    matches += 1
                    result = performoperationonitem()
                    write_in_file(result, row=object_to_match_id, col=matches)
                    if matches == 3:
                        remove_this_object_from_object_to_match_list()
Can you think of any way other than going through the whole output file line by line every time? I don't even know what to search for... Even better: is there a better way to find three matching objects in a DB and get the results in real time? (The operation will take a while, but I'd like to see the results pop out in real time.)
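For the real-time part, one approach is to write each result row as it is produced and flush immediately, so another process (or a `tail -f`) sees it right away. A minimal sketch, assuming a CSV output; `stream_results`, the file name, and the sample rows are all made up for illustration:

```python
# Sketch: stream result rows to a CSV as they are produced, flushing each
# one so the file can be watched in real time while the job runs.
import csv
import os

def stream_results(rows, path="results.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for row in rows:
            writer.writerow(row)
            f.flush()              # push Python's buffer to the OS
            os.fsync(f.fileno())   # push the OS buffer to disk

stream_results([("obj1", "match_a"), ("obj2", "match_b")])
```

In a real run `rows` would be a generator yielding results as the database loop finds them, so the file grows while the job is still working.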
Assuming itemmatch() is a reasonably simple function, this will do what I think you want better than your pseudocode:
    for match_obj in list_of_objects_to_match:
        db_objects = query_db_for_matches(match_obj)
        if len(db_objects) >= 3:
            result = performoperationonitem()
            write_in_file(result, row=match_obj.id, col=len(db_objects))
        else:
            write_blank_line(row=match_obj.id)  # if you want
Then the trick becomes writing the query_db_for_matches() function. Without more detail, I'll assume you're looking for objects that match in one particular field; call it type. In pymongo such a query would look like:
    def query_db_for_matches(match_obj):
        return pymongo_collection.find({"type": match_obj.type})
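Since pymongo needs a running MongoDB server, here is the same "find all records with this type" lookup sketched against the standard-library sqlite3 module instead; the `items` table, its columns, and the sample data are made up for illustration:

```python
# Sketch of query_db_for_matches() against an in-memory SQLite database.
# sqlite3 stands in for MongoDB so the pattern is runnable as-is.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, type TEXT)")
conn.executemany("INSERT INTO items (type) VALUES (?)",
                 [("red",), ("red",), ("red",), ("blue",)])

def query_db_for_matches(match_type, limit=3):
    # Equivalent of collection.find({"type": match_type}); LIMIT stops
    # scanning once we have the three matches the question asks for.
    cur = conn.execute(
        "SELECT id FROM items WHERE type = ? LIMIT ?", (match_type, limit))
    return cur.fetchall()

print(len(query_db_for_matches("red")))   # 3 rows of type "red" exist
print(len(query_db_for_matches("blue")))  # only 1 exists
```

The LIMIT clause mirrors the "stop after three matches" logic from the pseudocode, so the database does the early exit instead of the Python loop.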
To get this to run efficiently, make sure your database has an index on the field(s) you're querying on by first calling:

    pymongo_collection.ensure_index([("type", 1)])

(Note: ensure_index takes a key or a list of (key, direction) pairs, not a dict; in current PyMongo it has been removed in favor of create_index, which takes the same arguments.)
The first time you call ensure_index it could take a long time for a huge collection. But each time after that it will be fast -- fast enough that you could even put it into query_db_for_matches before your find and it would be fine.
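The same "index once, then query fast" advice can be sketched with sqlite3 (table and index names are illustrative); EXPLAIN QUERY PLAN shows whether the lookup actually uses the index:

```python
# Sketch: build the index once up front so per-type lookups don't scan
# the whole table, analogous to the ensure_index() call above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, type TEXT)")
conn.executemany("INSERT INTO items (type) VALUES (?)",
                 [(t,) for t in "abc" * 1000])

# One-time cost; "IF NOT EXISTS" makes repeated calls cheap no-ops.
conn.execute("CREATE INDEX IF NOT EXISTS idx_type ON items (type)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM items WHERE type = ?", ("a",)
).fetchone()
print(plan)  # the plan's detail column should mention idx_type
```

Without the CREATE INDEX line the plan would show a full table scan, which is exactly the slowdown the answer warns about for a huge collection.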