
Multiprocessing: no output from third process

Context: I have 3 files: 2 of which collect data every 10 seconds and one of which should be collecting this data for machine learning. I've set up multiprocessing with a function for each file that creates an instance of its class and calls the necessary functions. In the main block I create 3 processes, start them, and join them, where the third process is only started and joined every 10 seconds, since that is when data comes in and when it is needed.

import gui_and_keyboard_features
import brain_features
import machine_learning

import multiprocessing as mp
import sys
import time

# increase recursion limit
sys.setrecursionlimit(15000)

def first_file():
    gui1 = gui_and_keyboard_features.gui()
    gui1.realtime()
    gui1.every_5_min()
    gui1.main_window.mainloop()
 
def second_file():
    myBoard = brain_features.braindata(-1, 'COM3')
    myBoard.startStream()
    myBoard.collectData()
    # print(myBoard.compressed_brain_training_features)
 
def third_file():
    # machine learning related
    myml = machine_learning.ml()
    myml.add_raw_data()
    # myml.add_training_data()
    # myml.train_model()
    # myml.predict()
    # keyboard related
    ml_keyboard_data = gui_and_keyboard_features.gui()
    ml_keyboard_data.realtime()
    ml_keyboard_data.every_5_min()
    ml_keyboard_data.main_window.mainloop()
    # brain related
    ml_brain_data = brain_features.braindata()
    ml_brain_data.startStream()
    ml_brain_data.collectData()
 
if __name__ == "__main__":
    # add ml file code here

    start_time = time.time()

    proc1 = mp.Process(target=first_file)
    proc2 = mp.Process(target=second_file)
    proc3 = mp.Process(target=third_file)

    proc1.start()
    proc2.start()

    proc1.join()
    proc2.join()

    print("we are here")

    while True:
        if (int(time.time() - start_time) % 10 == 0.0) and (int(time.time() - start_time) != 0.0):
            proc3.start()
            proc3.join()

    print("finished running")

Problem: when running the multiprocessing file, I only get output from the 2 data files and nothing from the 3rd machine learning file. There are no while loops in the 2 data files, but one file is connected to a GUI made with tkinter, where an .after() function takes a time interval and a function and continuously reruns that function after the given interval. I have set up print statements where these .after() functions occur in the first data file, as well as in the second data and machine learning files. When this is run through the multiprocessing file, it correctly loops through these data files, but it never reaches the print statement contained in the machine learning file.

self.main_window.after(9500, self.realtime)
self.main_window.after(10000, self.every_5_min)
every 10s keyboard features
every 5 min keyboard features
brain features
every 10s keyboard features
every 5 min keyboard features
brain features
every 10s keyboard features
every 5 min keyboard features
brain features
every 10s keyboard features
every 5 min keyboard features
brain features
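For reference, the .after() rescheduling pattern described above works roughly like this (a minimal, self-contained sketch; the callback bodies here are placeholders, not the question's actual code):

import tkinter as tk

main_window = tk.Tk()

def realtime():
    print("every 10s keyboard features")
    # .after() schedules the function to run again after the given delay (ms),
    # so mainloop() keeps firing these callbacks indefinitely
    main_window.after(9500, realtime)

def every_5_min():
    print("every 5 min keyboard features")
    main_window.after(10000, every_5_min)

realtime()
every_5_min()
main_window.mainloop()  # blocks until the window is closed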

The usual pattern with multiprocessing is to start all the processes first and then join them; if you start and join them alternately you will run into bugs. For the scenario where you want a child process to do something every 10 seconds, you can use Pool from multiprocessing. From what I understood of your explanation, you would set up each child process as a worker, use workers 1 and 2 much as you did here, but put the 3rd worker to sleep while it is not needed.
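One way to express the "asleep while not needed" idea, here with a plain Process and a sleep loop rather than a Pool, could look like this sketch (do_ml_step() is a hypothetical placeholder for one round of the question's ML calls):

import multiprocessing as mp
import time

def do_ml_step():
    # hypothetical placeholder for one pass of the ML work,
    # e.g. myml.add_raw_data() in the question's code
    print("running ML step")

def third_worker():
    # keep the worker alive, but asleep while it is not needed
    while True:
        time.sleep(10)   # wake up every 10 seconds, when new data is due
        do_ml_step()

if __name__ == "__main__":
    proc3 = mp.Process(target=third_worker)
    proc3.daemon = True   # killed automatically when the main process exits
    proc3.start()
    time.sleep(35)        # the main process does its own work here, then exits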

Actually, your code stops even before reaching your while loop. Or is "we are here" ever printed? Calling join on processes 1 and 2 waits until those processes have terminated, but because processes 1 and 2 run indefinitely, your script never gets past the joins. Try running it without the join statements.

Also, you might want to set proc1.daemon = True before you call proc1.start() (and the same for proc2) to ensure the subprocesses are killed once the main process stops.
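Applied to the question's main block, that could look roughly like this (a sketch only; it assumes first_file, second_file, and third_file are defined as in the question, and that third_file is rewritten to loop and sleep internally rather than being restarted from the outside):

import multiprocessing as mp
import time

if __name__ == "__main__":
    proc1 = mp.Process(target=first_file)
    proc2 = mp.Process(target=second_file)
    proc3 = mp.Process(target=third_file)

    # daemon processes are terminated automatically when the main process exits
    proc1.daemon = True
    proc2.daemon = True
    proc3.daemon = True

    proc1.start()
    proc2.start()
    proc3.start()

    print("we are here")  # now reached, since nothing blocks on join()

    # keep the main process alive so the daemon children keep running
    while True:
        time.sleep(1)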

