
Building some kind of self-running queue script that waits for input from other Python scripts

I have a problem which, from my perspective, is somewhat special.

I am running a system (which I cannot change) that runs the same Python script 10-100 times simultaneously. Not all the time, but when it does, it runs them all at once.

This script, which is executed x times at exactly the same moment (or with a delay of only milliseconds), needs to ask a Web API for certain data. The Web API can't handle that many requests at once, which I can't change either, nor can I modify the API in any way.

So what I would like to build is some kind of separate Python script that runs all the time and waits for input from all those other scripts. This separate script would receive the request payload for the API, build a queue, and fetch all that data. Afterwards, it would hand the data back to the Python script that asked for it.
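To make it more concrete, here is a rough sketch of what I have in mind (the port, authkey and call_api are made up; the real API request would go where the placeholder is):

#!/usr/local/bin/python3

# broker.py - sketch of the separate, always-running queue script
from multiprocessing.connection import Listener
from queue import Queue
from threading import Thread

ADDRESS = ('localhost', 6000)  # made-up local port
AUTHKEY = b'secret'            # made-up shared key

jobs = Queue()

def call_api(payload):
    return {'echo': payload}  # placeholder for the real API request

def worker():
    # handle one payload at a time, so the Web API only ever
    # sees a single request in flight
    while True:
        conn, payload = jobs.get()
        conn.send(call_api(payload))
        conn.close()

Thread(target=worker, daemon=True).start()

with Listener(ADDRESS, authkey=AUTHKEY) as listener:
    while True:
        conn = listener.accept()
        jobs.put((conn, conn.recv()))

The script that runs 10-100 times would then just do:

from multiprocessing.connection import Client

with Client(('localhost', 6000), authkey=b'secret') as conn:
    conn.send({'the': 'request payload'})
    result = conn.recv()  # blocks until the broker has fetched the data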

Is this somehow possible? Can someone even understand my problem? Sorry for my complicated description :D

So far I have worked around this with an RNG in the script that is executed multiple times: before those scripts perform the API request, they pause for rng(x) milliseconds so they don't all execute the request at once. But this solution is not really failproof.
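In code, the workaround looks roughly like this (the 5-second upper bound is just an example, not my real value):

from random import randint
from time import sleep

# pause a random number of milliseconds before the API request
sleep(randint(0, 5000) / 1000)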

Maybe there is a better solution to my problem than my first idea.

Thanks for your help!

fcntl.flock - how to implement a timeout?

This command executes 5 instances of a Python script as fast as possible; the wait command then waits for all the background processes to finish.

for ((i=0;i<5;i++)) ; do ./same-lock.py &  done ; wait
[1] 66023
[2] 66024
[3] 66025
[4] 66026
[5] 66027
66025
66027
66024
66026
66023
[1]   Done                    ./same-lock.py
[2]   Done                    ./same-lock.py
[3]   Done                    ./same-lock.py
[4]-  Done                    ./same-lock.py
[5]+  Done                    ./same-lock.py

The Python code below ensures that only one of those scripts runs at a time.

#!/usr/local/bin/python3

# same-lock.py

import os
from random import randint
from time import sleep
import signal
from contextlib import contextmanager
import fcntl

lock_file = '/tmp/same.lock_file'

@contextmanager
def timeout(seconds):
    # Since Python 3.5 (PEP 475), an interrupted system call is retried
    # automatically unless the signal handler raises, so the handler has
    # to raise an exception for the alarm to break out of flock().
    def timeout_handler(signum, frame):
        raise TimeoutError("lock wait timed out")

    original_handler = signal.signal(signal.SIGALRM, timeout_handler)

    try:
        signal.alarm(seconds)
        yield
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, original_handler)


# wait up to 600 seconds for a lock
with timeout(600):
    with open(lock_file, "w") as f:
        try:
            # blocks until this process holds an exclusive lock
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
            # Print the process ID of the current process
            pid = os.getpid()
            print(pid)
            # Sleep a random number of seconds (between 1 and 5)
            sleep(randint(1, 5))
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
        except TimeoutError:
            print("Lock timed out")
