
Keeping a 'critical resource' in a separate Python script to prevent a race condition

I am very new to multiprocessing. I want to create a Python script so that anyone can SSH into my RPi and play with the GPIOs, with one condition: while a function is being used by one user, any other user must wait for 'x' seconds (until the function has finished executing) so that access is synchronized. To test this I created two test files on my PC, which will hopefully give you a better idea. File 1:

import multiprocessing
import time

def main1(input2, input1, num, val, lock):
    # critical section: only the process holding `lock` may run this at a time
    with lock:
        print(input2)
        print(input1)
        time.sleep(int(input1))
        val.value = val.value + 1

def main3(input2, input1, d, val, lock):
    # spawn a child process and wait for it to finish
    t1 = multiprocessing.Process(target=main1, args=(input2, input1, d, val, lock))
    t1.start()
    t1.join()
    print(val.value)

File 2:

import multiprocessing
import test  # File 1, saved as test.py

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    val = multiprocessing.Value('i', 1)

    while True:
        input3 = input('enter on')

        if input3 == 'on':
            relno = int(input('enter relay to turn on [1-7]: '))
            d = 0
            test.main3(input3, relno, d, val, lock)

        elif input3 == 'off':
            relno = int(input('enter relay to turn off [1-7]: '))
            d = 0
            test.main3(input3, relno, d, val, lock)

        else:
            print("not working")
            break

        print(val.value)

I am not getting any errors from either file. The only issue is that when I issue commands in parallel from two terminals, my critical resource is not protected and is accessed by both processes (different PIDs) simultaneously.

I hope this gives you an idea of what I am trying to achieve; any suggestions are helpful. Thanks.

If I understand correctly, you are running python file2.py in two separate terminals? In that case you have two completely separate instances of the main process (file2.py), each with its own subprocess (file1.main1) and its own mp.Value(). There is no way for those processes to know about each other or about the other's "shared" value: that kind of shared value can only be shared with child processes. If there is no parent-child relationship between the two processes, you must use another mechanism to share information. There are a couple of ways to do that, but they all boil down to the OS managing some resource that is common to all processes.

First of all, the filesystem is common to all processes, so you could use something like filelock to control access to the relays. The filesystem is also behind multiprocessing.shared_memory, which can be given a fixed name, allowing communication across unrelated processes (it doesn't provide a similarly easy analog of Lock, but it can be used quite easily for data transfer).
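
For example, here is a minimal sketch of the filelock idea (it assumes pip install filelock; the lock-file path /tmp/relay.lock and the switch_relay helper are names I made up for illustration):

import time
from filelock import FileLock, Timeout

LOCK_PATH = "/tmp/relay.lock"   # any path both scripts agree on

def switch_relay(state, relno, delay):
    # The OS-level file lock is shared by every process that uses the same path,
    # so two copies of file2.py started from different terminals exclude each other.
    lock = FileLock(LOCK_PATH)
    try:
        with lock.acquire(timeout=30):          # wait up to 30 s for the other user
            print(f"relay {relno} -> {state}")  # real GPIO call would go here
            time.sleep(delay)
    except Timeout:
        print("relays are busy in another process, try again later")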

Secondly, you could host a server on a fixed port which controls access to the relays, and the "clients" simply connect to that port (the port can be open only to localhost, or you could even allow external connections and avoid the need for SSH entirely). That way only a single process ever touches your "critical resource".
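
A rough sketch of that idea using only the standard library (the port 5055, the one-line "on 3" / "off 3" text protocol and the handler name are assumptions for illustration): a single server process owns the relays and serializes requests with an ordinary threading.Lock, while each SSH user runs a tiny client that just sends one command.

import socketserver
import threading
import time

relay_lock = threading.Lock()   # serializes relay access inside the one server process

class RelayHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # expects one line such as "on 3" or "off 3"
        state, relno = self.rfile.readline().decode().split()
        with relay_lock:                         # only one client drives the GPIOs at a time
            print(f"relay {relno} -> {state}")   # real GPIO call would go here
            time.sleep(2)
        self.wfile.write(b"done\n")

if __name__ == '__main__':
    with socketserver.ThreadingTCPServer(("127.0.0.1", 5055), RelayHandler) as server:
        server.serve_forever()

A client is then just a few lines: open socket.create_connection(("127.0.0.1", 5055)), send the command line, and read the reply.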

Thirdly, since you mention the RPi, you could install posix_ipc, which lets you create named locks, queues, and shared memory regions. It is very similar to the built-in multiprocessing.shared_memory in that you can refer to the same lock by name from separate files, but it adds OS-native locks and queues (not regular Python queues; they can only send strings).
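
Again only a sketch (it assumes pip install posix_ipc; the semaphore name "/relay_lock" and the switch_relay helper are invented for illustration): a named semaphore with an initial value of 1 behaves like a system-wide lock that both terminals see.

import time
import posix_ipc

# Created on first use, reopened by name afterwards; visible to every process on the machine.
sem = posix_ipc.Semaphore("/relay_lock", flags=posix_ipc.O_CREAT, initial_value=1)

def switch_relay(state, relno, delay):
    sem.acquire()                            # blocks until the other process calls release()
    try:
        print(f"relay {relno} -> {state}")   # real GPIO call would go here
        time.sleep(delay)
    finally:
        sem.release()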
