
Checking if script is already running (python / linux)

I am trying to add a function to a script to check if it is already running. This is because the script will be started by a cron job.

Here is a stub of what I attempted for that function:

import psutil
import sys
import time


print(__file__)


def check_if_running():
    # print('process_nb: ', len(list(psutil.process_iter())))
    for i, q in enumerate(psutil.process_iter()):
        n = q.name() 
        # print(i, n)
        if 'python' in n:
            print(i, n)
            c = q.cmdline() 
            print(c)
            if __file__ in c:
                print('already running')
                sys.exit()
            else:
                print('not yet running')
                return 


if __name__ == '__main__':
    check_if_running()
    while True:
        time.sleep(3)

I run the script a first time, then a second time in a separate shell. The second time it should print 'already running' and exit, but it doesn't.

Can anyone help me figure out why?

As @JohnGordon noticed in the comments, there is a logic problem in your code.

if __file__ in c:
    print('already running')
    sys.exit()
else:
    print('not yet running')
    return

Here, if it checks a process and it doesn't match the file, the function returns. That means it won't check any remaining processes.

You can only deduce that the program is not yet running after the loop has been allowed to complete.

def check_if_running():
    # print('process_nb: ', len(list(psutil.process_iter())))
    for i, q in enumerate(psutil.process_iter()):
        n = q.name() 
        # print(i, n)
        if 'python' in n.lower():
            print(i, n)
            c = q.cmdline() 
            print(c)
            if __file__ in c:
                print('already running')
                sys.exit()
    # every process has been checked
    print('not yet running')

I also changed 'python' in n to 'python' in n.lower(), because on my system the process is called 'Python', not 'python', and this change should cover both cases.

However, when I tried this I found another problem, which is that the program finds its own process and always shuts down, even if it's the only instance of itself running.

To avoid that, maybe you want to count the number of matching processes instead, and only exit if it finds more than one match.

def count_processes(name, file):
    return sum(name in q.name().lower() and file in q.cmdline() for q in psutil.process_iter())

def check_if_running():
    if count_processes('python', __file__) > 1:
        print('already running')
        sys.exit()
    else:
        print('not yet running')
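
One caveat with matching on __file__ (my addition, not part of the original answer): cmdline() reports arguments exactly as each process was invoked, so a script started by cron with an absolute path will not match a __file__ that is relative, and vice versa. Below is a hedged sketch that compares canonical paths instead; the helper name count_processes_normalized is mine, and resolving another process's relative arguments against our own working directory is only best-effort:

```python
import os

import psutil


def count_processes_normalized(name, file):
    """Count processes whose name contains `name` and whose command line
    refers to `file`, comparing canonical paths so that relative and
    absolute invocations of the same script still match (best-effort)."""
    target = os.path.realpath(file)
    count = 0
    for q in psutil.process_iter():
        try:
            if name in q.name().lower() and any(
                    os.path.realpath(arg) == target for arg in q.cmdline()):
                count += 1
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            # the process exited meanwhile, or we may not inspect it
            continue
    return count
```

As before, a count greater than one means some other instance is running besides this one.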

Here is a possible alternative: a wrapper based on Linux file locking. It can be added at the start of the command line in your cron job, and then no check is needed inside your script itself.

Then in the crontab, just use this command:

/path/to/lock_wrapper --abort /path/to/lock_file your_command [your_command_args...] 

Ensure that the lockfile is on a local filesystem so that file locking works properly. (Some types of shared filesystem do not handle file locks reliably.)

If the file is already locked, the wrapper will abort (exiting with status 1). Without --abort, it would wait for the lock instead.

#!/usr/bin/env python3

"""
   a wrapper to run a command with a lock file so that if multiple
   commands are invoked with the same lockfile, they will only run one
   at a time, i.e. when it's running it applies an exclusive lock to the
   lockfile, and if another process already has the exclusive lock then
   it has to wait for the other instance to release the lock before it
   starts to run, or optionally the second process will simply abort

   can be used for running instances of commands that are
   resource-intensive or will in some other way conflict with each
   other
"""

import sys
import os
import fcntl
import subprocess
from argparse import ArgumentParser


def parse_args():
    parser = ArgumentParser(description=__doc__)
    parser.add_argument(
        "-a", "--abort",
        action="store_true",
        help="abort if the lockfile is already locked (rather than waiting)")
    parser.add_argument("lockfile",
                        help=("path name of lockfile "
                              "(will be created if it does not exist)"))
    parser.add_argument("command",
                        nargs="*",
                        help="command (with any arguments)")
    return parser.parse_args()


def ensure_exists(filename):
    if not os.path.exists(filename):
        with open(filename, "w"):
            pass


def lock(fh, wait=True):
    if wait:
        fcntl.flock(fh, fcntl.LOCK_EX)
    else:
        try:
            fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            sys.exit(1)


def unlock(fh):
    fcntl.flock(fh, fcntl.LOCK_UN)    


args = parse_args()
ensure_exists(args.lockfile)
with open(args.lockfile) as fh:
    lock(fh, wait=not args.abort)    
    with subprocess.Popen(args.command) as proc:
        return_code = proc.wait()
    unlock(fh)
sys.exit(return_code)
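
If you prefer to keep everything inside the script rather than using a wrapper, the same flock technique can be applied directly. A minimal sketch follows (the lock-file path /tmp/myscript.lock is just an example): the returned handle must be kept alive for the lifetime of the process, and the kernel releases the lock automatically when the process exits, even if it crashes.

```python
import fcntl
import sys


def acquire_single_instance_lock(path):
    """Try to take an exclusive, non-blocking lock on `path`.

    Returns the open file handle on success (keep a reference to it for
    the whole process lifetime), or None if another instance already
    holds the lock.
    """
    fh = open(path, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        # EWOULDBLOCK: another process holds the exclusive lock
        fh.close()
        return None
    return fh


if __name__ == "__main__":
    lock_handle = acquire_single_instance_lock("/tmp/myscript.lock")
    if lock_handle is None:
        print("already running")
        sys.exit(1)
    print("not yet running")
    # ... rest of the script runs here while the lock is held
```

Unlike the process-scanning approach, this cannot be fooled by path or process-name differences, but it is Unix-only (fcntl is not available on Windows).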
