How to write a consumer for a huge generator that does not leak memory?
TL;DR: the cause turned out to be ThreadPoolExecutor. See: Memory usage of concurrent.futures.ThreadPoolExecutor in Python 3.
Here is a Python script (heavily simplified) that runs the routing for everything, and along the way it exhausts all available memory.

I understand the problem is that the main function never returns, so the objects created inside it are never reclaimed by the garbage collector.

My main question: can I write a consumer for the returned generator so that the data gets cleaned up as it is processed? Or should I just call the garbage collector utilities explicitly?
from concurrent.futures import ThreadPoolExecutor, as_completed

# thread pool executor like in the Python documentation example
# (`threads` made a parameter here; the original referenced a name defined in main())
def table_process(callable, total, threads=10):
    with ThreadPoolExecutor(max_workers=threads) as e:
        future_map = {
            e.submit(callable, i): i
            for i in range(total)
        }
        for future in as_completed(future_map):
            if future.exception() is None:
                yield future.result()
            else:
                raise future.exception()
import argh
import fiona
import pandas as pd
import geopandas as gpd

@argh.dispatch_command
def main():
    threads = 10
    data = pd.DataFrame(...)  # about 12K rows

    # this function routes only one slice of sources/destinations
    def _process_chunk(x: int) -> gpd.GeoDataFrame:
        # slicing is more complex, but simplified here for presentation
        # do cross-product and an http request to process the result
        result_df = _do_process(grid[x], grid)
        return result_df

    # writing to geopackage
    with fiona.open('/tmp/some_file.gpkg', 'w', driver='GPKG', schema=...) as f:
        for results_df in table_process(_process_chunk, len(data)):
            aggregated_df = results_df.groupby('...').aggregate({...})
            f.writerecords(aggregated_df)
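One thing worth noting about the code above, independent of the executor itself: `future_map` keeps a strong reference to every finished future (and therefore to its result) until the generator exits, so results accumulate even after they have been yielded and consumed. A minimal sketch of a variant that drops each future as soon as it is consumed (the name `table_process_release` and the explicit `threads` parameter are mine, not from the post):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def table_process_release(fn, total, threads=10):
    with ThreadPoolExecutor(max_workers=threads) as e:
        future_map = {e.submit(fn, i): i for i in range(total)}
        # iterate over a snapshot so the dict can shrink as we go
        for future in as_completed(list(future_map)):
            future_map.pop(future)  # drop our last strong reference to the future
            exc = future.exception()
            if exc is not None:
                raise exc
            yield future.result()
```

Once a result has been yielded and the caller is done with it, the future and its payload become collectable immediately, instead of living in `future_map` until the `with` block ends.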
It turned out that the ThreadPoolExecutor was keeping its worker threads alive and not releasing the memory.
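To answer the consumer question directly: rather than submitting all ~12K tasks up front (where every queued work item and finished result stays referenced until the generator finishes), the submission itself can be bounded so that only a small window of futures is alive at any time. A hedged sketch of that pattern; the function name and the `window` parameter are assumptions, not from the post:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from itertools import islice

def table_process_bounded(fn, total, threads=10, window=100):
    indices = iter(range(total))
    with ThreadPoolExecutor(max_workers=threads) as e:
        # seed the window, then keep it topped up as results complete
        pending = {e.submit(fn, i) for i in islice(indices, window)}
        while pending:
            done = next(as_completed(pending))  # wait for one finished future
            pending.discard(done)
            for i in islice(indices, 1):  # refill one slot, if work remains
                pending.add(e.submit(fn, i))
            if done.exception() is not None:
                raise done.exception()
            yield done.result()
```

With this shape, peak memory is bounded by `window` in-flight tasks regardless of `total`, which matters when each result is a sizable DataFrame.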