
Python: How do I save data even when a long running batch process crashes?

I have a long-running Python batch process that consumes data from a REST API. In case the program crashes or terminates for some reason, I want to save information about the position of my last processing so that I can restart and resume. I do not want to lose that info, so it should be stored on disk and not only in RAM.

How can I handle this with a lightweight solution, ideally without a database?

Thanks, Philip

You might want to look at the Python module 'shelve'. It allows you to save data on disk and then retrieve it when needed.
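A minimal sketch of how shelve could be used for this kind of checkpointing; the file name "checkpoint", the key "last_processed_index", and the process_item helper are hypothetical placeholders, not something given in the question or answer.

    import shelve

    CHECKPOINT_FILE = "checkpoint"  # shelve may add its own file extension on disk

    def process_item(item):
        # Placeholder for the real REST API processing work.
        print("processing", item)

    def run_batch(items):
        # Open (or create) the on-disk checkpoint store.
        with shelve.open(CHECKPOINT_FILE) as db:
            # Resume right after the last successfully processed index, or from 0.
            start = db.get("last_processed_index", -1) + 1

            for index in range(start, len(items)):
                process_item(items[index])
                db["last_processed_index"] = index  # record progress
                db.sync()  # flush to disk so a crash loses at most the current item

    if __name__ == "__main__":
        run_batch(["a", "b", "c"])

Calling db.sync() after every item trades some throughput for durability; if the data volume is large, syncing every N items would reduce the overhead at the cost of reprocessing up to N items after a crash.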
