Collecting variables from multiple python scripts in one parent script

I am trying to write a Python parent script that collects data from 4 child scripts. What I have:

  • Every child script reads data from a different sensor, and they have to be read continuously. So what I do right now is read them in while True loops.

  • Different sensors have different response times, so one child script reads at, let's say, a rate of once per second, while another reads 100 times faster.

My goal / my struggle:

  • Collect all the child-generated data in one script, in 4 different variables

What I have achieved so far:

  • The child scripts are doing their work fine and reading the data with no issues

  • I can start all 4 child scripts from the terminal as subprocesses, but I have no idea how to collect their generated data

  • I can pass data between scripts, but never from two scripts at the same time, and it is way too slow, since 'from script import variable' is only as fast as the reading of the sensor.

Later I plan to send those 4 variables via Bluetooth to my phone, which I have already managed to do with a single sensor.

Since I am quite new to the whole Raspberry Pi / Python community, I would firstly like to apologise for the unspecific explanation. Please feel free to ask for further information or to suggest solving things differently. Secondly, I would appreciate it a lot if you could help me with code snippets, because, again, I am quite new, and that helps me much more than links to libraries or documentation that raise more questions than they answer.

Thank you a lot in advance

The easiest way to accomplish this would be to use threads, available through the threading module. Collecting data is then as simple as writing to a shared variable (with thread safety, of course).

import threading
import time

data1 = []
lock1 = threading.Lock()  # Use a lock, mutex, or semaphore to ensure thread safety

def read_sensor():
    return 42  # placeholder for the actual sensor reading of one child script

def do_stuff_with(values):
    print(values)  # placeholder for whatever the parent does with the data

def foo():
    while True:
        lock1.acquire()
        data1.append(read_sensor())
        lock1.release()
        time.sleep(10)

foo_thread = threading.Thread(target=foo)
foo_thread.start()

while True:
    lock1.acquire()
    # If all values don't need to be held in memory, the parent
    # script can implement some kind of queue and delete values
    # after processing
    do_stuff_with(data1)
    lock1.release()
    # Queuing also allows the parent script to run at a much slower rate
    time.sleep(100)

The example above is a very quick demonstration of threading. There are better data types to implement queuing with, such as collections.deque.
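As a rough sketch of that queuing idea, a sensor thread could push readings into a collections.deque and the parent loop could drain it; read_sensor and do_stuff_with are placeholders for the real sensor read and the real processing, not part of the original code:

import collections
import threading
import time

def read_sensor():
    return 42  # placeholder for the actual sensor reading

def do_stuff_with(value):
    print(value)  # placeholder for the parent's processing

queue1 = collections.deque()  # append() and popleft() are thread-safe in CPython

def sensor_worker():
    while True:
        queue1.append(read_sensor())  # producer: push a new reading
        time.sleep(1)

threading.Thread(target=sensor_worker, daemon=True).start()

while True:
    while queue1:  # consumer: drain everything queued so far
        do_stuff_with(queue1.popleft())
    time.sleep(10)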

Threading in Python does have some caveats though: Python runs a script in one single process and there is a Global Interpreter Lock, meaning the threading module provides concurrency, but not parallelism. For only 4 data collections this probably would not cause any issues, but if better performance is needed, the multiprocessing module provides tools for spawning multiple Python processes, allowing for true parallelism.
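If the multiprocessing route turns out to be necessary, a minimal sketch could pass readings back to the parent through a multiprocessing.Queue; again, read_sensor and do_stuff_with are just placeholders for the real code:

import multiprocessing
import time

def read_sensor():
    return 42  # placeholder for the actual sensor reading

def do_stuff_with(value):
    print(value)  # placeholder for the parent's processing

def sensor_worker(queue):
    while True:
        queue.put(read_sensor())  # each child process pushes its readings
        time.sleep(1)

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    for _ in range(4):  # one process per sensor script
        multiprocessing.Process(target=sensor_worker, args=(queue,), daemon=True).start()

    while True:
        do_stuff_with(queue.get())  # blocks until one of the children has produced a value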
