Crontab spawning multiple instances of task instead of a single instance
I have a Python script which needs to run every 5 minutes. I am calling this script via a bash file named activate.sh:
#!/bin/bash
../../env/bin/python3 ./run.py
Then, to run this task every 5 minutes, I have a crontab job configured as such:
*/5 * * * * cd /var/www/usa/api/BackgroundTasks && /bin/bash ./activate.sh
I have literally the same setup running on 3 different servers, 2 of which work just fine and start the task only once. The one on the USA server, however, spawns multiple instances of run.py. This went so far that it nearly filled up the entire memory, causing performance issues.
Here is the app.py code, but I don't think the issue is there; I think the issue is with cron itself:
import time, sys, os, datetime

try:
    sys.path.append("../")
    from Flabstraction.Flabstraction import Pysqlalchemy, MailingService
    from Flabstraction.constants import constants_dict
    from AutomatedReports import schedules_builder_2
    from DeleteOldJobs import delete_old_jobs
    from StallTimers import stop_timers

    # Pick the constants set based on which deployment folder we run from.
    constants_selector = "LOCAL"
    path_to_self = os.getcwd()
    selectors = ['/au/', '/eu/', '/nz/', '/usa/', '/demo/']
    for sel in selectors:
        if sel in path_to_self:
            constants_selector = sel.upper().replace('/', '')
            break

    constants = constants_dict[constants_selector]
    mail = MailingService()
    sql = Pysqlalchemy('mysql+pymysql', constants["mysql_user"], constants["mysql_pw"], constants["server_ip"], constants["mysql_db"])

    # Note: mode 'w' truncates background.log on every run.
    file = open('./background.log', 'w')
    file.write(str(datetime.datetime.now()) + " : running started. \n")

    if constants_selector in ["LOCAL", "US"]:
        schedules_builder_2(sql, constants["timezone"], constants_selector, mail, constants)
    if constants_selector in ["AU", "NZ", "EU"]:
        delete_old_jobs(sql, constants["timezone"])
        stop_timers(sql, constants["timezone"], constants["_instance_id"])

    file.write(str(datetime.datetime.now()) + " : running stopped. \n")
    file.close()
except Exception as e:
    file = open("./background.log", "w")
    file.write("Error occurred: " + str(e))  # write() takes a single string
    file.close()
    raise e
Any idea how to resolve this, and why it would be happening on only 1 of 3 servers?
Okay, so I went through my code in detail and found that one of my constructors was giving a warning message which essentially froze the script while waiting for input from the user.
The way I resolved this was by using the getcwd() function imported from os in order to check which folder is calling the Python script where this happened.
Having:
├── BackgroundTasks
│ ├── AutomatedReports.py
│ ├── DeleteOldJobs.py
│ ├── StallProjects.py
│ ├── StallTimers.py
│ ├── activate.sh
│ ├── background.log
│ └── run.py
├── Config
│ ├── Config.py
│ ├── common.py
│ ├── connect.py
│ ├── constants.py
├── app.py
I was executing run.py from my activate.sh file. When I run the application, I call app.py.
Using the getcwd() function, I was able to check whether I call the script from BackgroundTasks, and therefore add a case in my connect.py script to ignore the user input when invoked from that folder.
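The check described above can be sketched roughly like this; the function name and the cwd parameter are illustrative, not the actual connect.py code:

```python
import os

def should_prompt_user(cwd=None):
    """Return False when invoked from the cron job's working directory,
    so the connecting code can skip its interactive input."""
    cwd = cwd if cwd is not None else os.getcwd()
    # The cron job cd's into .../BackgroundTasks before running run.py,
    # so the folder name is enough to tell the two call sites apart.
    return "BackgroundTasks" not in cwd

# Invoked via cron from the BackgroundTasks folder: no prompt.
print(should_prompt_user("/var/www/usa/api/BackgroundTasks"))  # False
# Invoked normally via app.py from the project root: prompt as usual.
print(should_prompt_user("/var/www/usa/api"))  # True
```

Skipping the prompt means the cron-invoked process no longer blocks, so each run finishes before the next 5-minute slot instead of piling up.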