
Building some kind of self-running queue script waiting for other Python scripts' inputs

I have a problem which, from my perspective, is somewhat special.

I am running a system (which I cannot change) that runs the same Python script 10-100 times simultaneously. Not all the time, but when it does, all the instances start at once.

This script, which is executed x times at the exact same moment (or with just a delay of milliseconds), needs to ask a Web API for certain data. The Web API can't handle that many requests at once (which I can't change either, nor can I modify the API in any way).

So what I would like to build is some kind of separate Python script that runs all the time and waits for input from all those other scripts. This separate script should receive the request payload for the API, then create a queue and fetch all the data. Afterwards, it gives the data back to the Python script that asked for it.
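To make the idea concrete, here is a rough sketch of what I imagine (the port, authkey and fetch_from_api() placeholder are purely illustrative):

# broker.py - sketch of the separate queue script
from multiprocessing.connection import Listener

ADDRESS = ('localhost', 6000)   # illustrative address/port
AUTHKEY = b'change-me'          # illustrative shared secret

def fetch_from_api(payload):
    # placeholder for the real Web API request
    return {'echo': payload}

with Listener(ADDRESS, backlog=100, authkey=AUTHKEY) as listener:
    while True:
        # connections are accepted and served one at a time, so the
        # Web API only ever sees a single request at any moment
        with listener.accept() as conn:
            payload = conn.recv()               # payload from a client script
            conn.send(fetch_from_api(payload))  # hand the API data back

The scripts that are started 10-100 times would then do something like:

# inside the script that is executed many times
from multiprocessing.connection import Client

with Client(('localhost', 6000), authkey=b'change-me') as conn:
    conn.send({'endpoint': '/data', 'id': 42})  # illustrative payload
    data = conn.recv()                          # blocks until the broker answers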

Is this somehow possible? Does anyone even understand my problem? Sorry for the complicated description :D

Actually, I worked around this problem with an RNG in the script that is executed multiple times: before those scripts perform the API request, they pause for rng(x) milliseconds, so they don't all execute the request at once - but this solution is not really failproof.
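For reference, the current workaround is roughly this (the 2000 ms upper bound is made up for the example):

# workaround sketch: random jitter before the API request
from random import randint
from time import sleep

sleep(randint(0, 2000) / 1000)  # pause 0-2000 ms
# ... then perform the Web API request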

Maybe there is a better solution to my problem than my first idea.

Thanks for your help!

fcntl.flock - how to implement a timeout?

This command executes 5 instances of a Python script as fast as possible, then the wait command waits for all the background processes to finish.

for ((i=0;i<5;i++)) ; do ./same-lock.py &  done ; wait
[1] 66023
[2] 66024
[3] 66025
[4] 66026
[5] 66027
66025
66027
66024
66026
66023
[1]   Done                    ./same-lock.py
[2]   Done                    ./same-lock.py
[3]   Done                    ./same-lock.py
[4]-  Done                    ./same-lock.py
[5]+  Done                    ./same-lock.py

The Python code below ensures that only one of those scripts runs at a time.

#!/usr/local/bin/python3

# same-lock.py

import os
from random import randint
from time import sleep
import signal, errno
from contextlib import contextmanager
import fcntl

lock_file = '/tmp/same.lock_file'

@contextmanager
def timeout(seconds):
    def timeout_handler(signum, frame):
        # Since PEP 475 (Python 3.5+), system calls interrupted by a signal
        # are retried automatically when the handler simply returns, so the
        # handler must raise for the flock() below to actually time out.
        raise InterruptedError(errno.EINTR, "flock() timed out")

    original_handler = signal.signal(signal.SIGALRM, timeout_handler)

    try:
        signal.alarm(seconds)
        yield
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, original_handler)


# wait up to 600 seconds for a lock
with timeout(600):
    f = open(lock_file, "w")
    try:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
        # Print the process ID of the current process
        pid = os.getpid()
        print(pid)
        # Sleep a random number of seconds (between 1 and 5)
        sleep(randint(1,5))
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)
    except IOError as e:
        # InterruptedError raised by the handler is an IOError with EINTR
        if e.errno != errno.EINTR:
            raise
        print("Lock timed out")
