
How to use shared memory concept in Multiprocessing

I want to start two processes that share a variable. One starts processing immediately; the other waits for a trigger (and the shared variable) from the first process before it starts processing.

My first process calculates the distance, and the second process acts differently based on the distance traveled. Distance is passed as an argument, and `current_conveyer` is the shared-memory variable.

Here is my code:

def process1():
    current_conveyer = Value('d', 'SC1')   # also: how do I initialize string values? 'd' is double-precision float.

    while condition:
        conveyer_type = current_conveyer.value
        S = pickle.load(open('conveyer_speed.p', 'rb'))[conveyer_type]
        D = S * T  # speed changes, so recalculate at every instant
        # trigger the second process here -- do NOT create a new process
        time.sleep(0.005)

def process2(current_conveyer, distance):
    while True:
        if some_condition:
            current_conveyer = 'SC2'
        elif some_other_condition:
            current_conveyer = 'SC3'
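(Side note on the inline question: `Value('d', …)` holds a double, so it cannot store `'SC1'`. One way to share a short name between processes is a byte `Array`; a minimal sketch, assuming a Unix platform and that conveyer names fit in the padded buffer — the names and sizes here are made up:)

```python
import multiprocessing

def rename(shared_name, new_name):
    # child process: overwrite the conveyer name in shared memory, in place
    shared_name.value = new_name

def demo():
    ctx = multiprocessing.get_context('fork')  # assumes a Unix platform
    # typecode 'c' gives a shared byte buffer; pad it so longer names fit
    shared_name = ctx.Array('c', b'SC1\x00\x00')
    p = ctx.Process(target=rename, args=(shared_name, b'SC2'))
    p.start()
    p.join()
    return shared_name.value.decode()  # bytes up to the first NUL

if __name__ == '__main__':
    print(demo())  # SC2
```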

Right now, I'm starting a new process on every iteration of the while loop.

Instead, I want a single process that keeps listening and shares the variable: when a trigger is sent, that process should wake up and do the work, rather than a completely new process being created.

I know this can be done with queues and pipes, but using queues and pipes would defeat the purpose of shared memory.

I've tried implementing the above with both queues and pipes; there were some time-efficiency issues, so now I want to try the shared-memory variable method.

So, given the above, how do I keep the process listening and implement the shared memory concept as well?

The simplest way to do multiprocessing is to have a Pool execute several larger jobs, each of which returns data to the parent process.
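(A minimal sketch of that pattern; `compute_distance` and its inputs are made-up stand-ins:)

```python
from multiprocessing import Pool

def compute_distance(job):
    # hypothetical job: distance = speed * elapsed time
    speed, elapsed = job
    return speed * elapsed

if __name__ == '__main__':
    jobs = [(2.0, 3.0), (1.5, 4.0), (3.0, 1.0)]
    with Pool(processes=3) as pool:
        # each job runs in a worker; results return to the parent, in order
        results = pool.map(compute_distance, jobs)
    print(results)  # [6.0, 6.0, 3.0]
```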

If you really want a 'shared' variable, create a Manager object and use it to create managed (shared) objects. That way multiple processes can read and write values, and the values are communicated to the different child processes.

The following code has a simple "main" that starts two children, then waits for them. One child wakes up every second, appends an item to a managed (shared) list, then waits. The other child periodically consumes and prints the shared list, then sleeps.

source

import multiprocessing, signal, time

def producer(objlist):
    '''
    add an item to list every sec
    '''
    while True:
        try:
            time.sleep(1)
        except KeyboardInterrupt:
            return
        msg = 'ding: {:04d}'.format(int(time.time()) % 10000)
        objlist.append(msg)
        print(msg)


def scanner(objlist):
    '''
    every now and then, consume objlist & run calculation
    '''
    while True:
        try:
            time.sleep(3)
        except KeyboardInterrupt:
            return
        print('items: {}'.format(list(objlist)))
        objlist[:] = []


def main():

    # create obj sharable between all processes
    manager = multiprocessing.Manager()
    my_objlist = manager.list() # pylint: disable=E1101

    multiprocessing.Process(
        target=producer, args=(my_objlist,),
    ).start()

    multiprocessing.Process(
        target=scanner, args=(my_objlist,),
    ).start()

    # kill everything after a few seconds
    signal.signal(
        signal.SIGALRM, 
        lambda _sig,_frame: manager.shutdown(),
        )
    signal.alarm(12)

    try:
        manager.join() # block until the manager is shut down by the alarm
    except KeyboardInterrupt:
        pass


if __name__=='__main__':
    main()

output

ding: 8392
ding: 8393
ding: 8394
ding: 8395
ding: 8396
ding: 8397
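(To get the question's trigger behaviour — a second process that sleeps until the first one wakes it — a `multiprocessing.Event` can be combined with a managed value. A rough sketch, assuming a Unix platform, with the conveyer names and the `'-handled'` suffix invented for illustration:)

```python
import multiprocessing

def waiter(trigger, shared):
    # block here until the other process fires the trigger
    trigger.wait()
    # then act on whatever value the producer left in the managed dict
    shared['conveyer'] += '-handled'

def demo():
    ctx = multiprocessing.get_context('fork')  # assumes a Unix platform
    manager = ctx.Manager()
    shared = manager.dict(conveyer='SC1')
    trigger = ctx.Event()
    p = ctx.Process(target=waiter, args=(trigger, shared))
    p.start()
    shared['conveyer'] = 'SC2'  # update the shared state first...
    trigger.set()               # ...then wake the waiting process
    p.join()
    return shared['conveyer']

if __name__ == '__main__':
    print(demo())  # SC2-handled
```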
