Python script stops, no errors given
I have a Python script that needs to be running all the time. Sometimes it can run for a whole day, sometimes it only runs for about an hour.
import time
import RPi.GPIO as GPIO
import fdb

con = fdb.connect(dsn='10.100.2.213/3050:/home/trainee2/Desktop/sms', user='sysdba', password='trainee') # connect to the database
cur = con.cursor() # initialize the cursor

pinnen = [21,20,25,24,23,18,26,19,13,6,27,17] # the GPIO pins we use; they are the same on all Pis and we need them in this sequence
status = [0] * 12 # an array where we'll save the status of each pin
ids = []
controlepin = [2] * 12 # same as the status array, only one step behind, so we know where a change happened and can send it

def getPersonIDs(): # here we get the IDs
    cur.execute("SELECT first 12 A.F_US_ID FROM T_RACK_SLOTS a order by F_RS_ID;") # this is where the code changes for each Pi
    for (ID,) in cur:
        ids.append(ID) # append all the IDs to the array

GPIO.setmode(GPIO.BCM) # initialize GPIO
getPersonIDs() # get the IDs we need
for p in range(0, 12):
    GPIO.setup(pinnen[p], GPIO.IN) # set up all the pins to read out data

while True: # this will repeat endlessly
    for e in range(0, 12):
        if ids[e]: # if there is a value in ids (only necessary for Pi 3, when there are not enough users)
            status[e] = GPIO.input(pinnen[e]) # get the status of the GPIO: 0 is dark, 1 is light
            if status[e] != controlepin[e]: # if there are changes
                id = ids[e]
                if id != '': # if the id is not empty
                    if status[e] == 1: # if there is no cell phone present
                        # careful! status 1 sends 0 and status 0 sends 1, to make it make sense in the database
                        cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, 0)", (id,))
                    else:
                        cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, 1)", (id,))
                    con.commit() # commit the query
                controlepin[e] = status[e] # save the change so we don't spam the database
    time.sleep(1) # sleep for one second so the endless loop doesn't peg the CPU
The script is used for a cell phone rack: through LDRs I can see whether a cell phone is present, and then I send that data to a Firebird database. The scripts run on my Raspberry Pis.
Can it be that the script just stops if the connection is lost for a few seconds? Is there a way to make sure the queries are always sent?
More than that: the script IS stopping for every Firebird command, including con.commit(), and it only continues when Firebird has processed the command/query.
So, while not knowing much about Python libraries, I would still give you some advice.
1) Use parameters and prepared queries as much as you can.
if status[e] == 1: # if there is no cell phone present
    cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, 0)", (id,))
else:
    cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, 1)", (id,))
That is not the best idea. You force the Firebird engine to parse the query text and build the query again and again. A waste of time and resources.
The correct approach is to make an INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, ?) query, then prepare it, and then run the already-prepared query, changing only the parameters. You would prepare it once, before the loop, and then run it many times.
Granted, I do not know how to prepare queries in the Python library, but I think you will find examples.
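A minimal sketch of the "build once, execute many times" pattern, using Python's built-in sqlite3 module so it is self-contained (sqlite3 caches the parsed statement when the same SQL text is reused; the fdb driver reportedly offers an explicit Cursor.prep() for the same purpose, but that is an assumption from its documentation, not verified here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE T_ENTRIES (F_US_ID INTEGER, F_EN_STATE INTEGER)")

# Build the query text ONCE, outside the loop...
INSERT_ENTRY = "INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, ?)"
# (with fdb you could go further: prepared = cur.prep(INSERT_ENTRY),
#  then cur.execute(prepared, params) -- an assumption based on the fdb docs)

# ...and run it many times, changing only the parameters.
for person_id, state in [(1, 0), (2, 1), (3, 0)]:
    cur.execute(INSERT_ENTRY, (person_id, state))
con.commit()

cur.execute("SELECT count(*) FROM T_ENTRIES")
print(cur.fetchone()[0])  # → 3
```

This avoids forcing the engine to re-parse the query text on every insert, which is exactly the waste described above.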
2) Do not use an SQL server for saving every single data element you get. It is a known malpractice that was already being warned against a decade ago, especially with a lazy, versioned engine like InterBase/Firebird.
The thing is, with every statement of yours Firebird checks some internal statistics, and sometimes it decides the time has come to do housekeeping. For example, your select statement can trigger garbage collection: Firebird might stop to scan the whole table, find the orphaned, obsolete versions of rows, and clear them away. Likewise, your insert statement can trigger index recreation: if Firebird thinks the B-tree of an index has become too one-sided, it drops it and builds a new balanced tree, reading out the whole table (and yes, reading the table may provoke GC on top of the tree recreation).
More than that, let us steer away from Firebird specifics: what would you do if Firebird crashed? Just crashed; it is a program, and like every program it may have bugs. Or, for example, you run out of disk space and Firebird can no longer insert anything into the database. Where would your hardware sensor data end up then? Wouldn't it just be lost?
http://www.translate.ru - this one usually works better than Google or Microsoft translation, especially if you set the vocabulary to computers.
See #7 at http://www.ibase.ru/dontdoit/ - "Do not issue a commit after every single row". #24 at https://www.ibase.ru/45-ways-to-improve-firebird-performance-russian/ suggests committing packets of about a thousand rows as a sweet spot between too many transactions and too much uncommitted data. Also check #9, #10, #16, #17 and #44 at the last link.
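The "commit in packets" advice can be sketched like this (again using sqlite3 so the snippet is runnable; the batch size and helper name are illustrative, not from the original script):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE T_ENTRIES (F_US_ID INTEGER, F_EN_STATE INTEGER)")

BATCH_SIZE = 1000   # roughly the "sweet spot" packet size the linked article suggests
pending = 0         # rows inserted since the last commit

def record_entry(person_id, state):
    """Insert one row, committing only once per BATCH_SIZE rows."""
    global pending
    cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, ?)",
                (person_id, state))
    pending += 1
    if pending >= BATCH_SIZE:
        con.commit()
        pending = 0

for i in range(2500):
    record_entry(i, i % 2)
con.commit()  # flush the final partial packet

cur.execute("SELECT count(*) FROM T_ENTRIES")
print(cur.fetchone()[0])  # → 2500
```

The trade-off is explicit: fewer transactions than commit-per-row, but never more than BATCH_SIZE rows of uncommitted data at risk.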
The overall structure of your software complex, I believe, has to be split into two services.
Service #1 does the realtime capture and nothing else: it only appends each row to a plain flat file, which is about the simplest and fastest operation there is. So, for example, you set the threshold of 10240 rows; once a file has that many rows, the service closes it, renames it so service #2 knows it is complete, and opens a fresh one, which is just changing a file handler type variable in your program. You would flush OS file buffers after every row, or every 100 rows, or whatever gives the best balance between reliability and performance. The key point here is that this service should only do the most simple and most fast operations, since any random delay means your realtime-generated data is lost and will never be recovered.
BTW, how can you disable garbage collection in Python, so it would not suddenly "stop the world"? Okay, half-joking. But the point stands: GC is a random, non-deterministic bogging-down of your system, and it is badly compatible with regular, non-buffered hardware data acquisition. The primary acquisition of non-buffered data is better done by the most simple (that is, most predictable) services; GC is a good global optimization, but the price is that it tends to generate sudden local no-service spikes.
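A sketch of such a capture service, under stated assumptions (the file naming scheme, threshold, and class name are hypothetical illustrations; the answer's own example threshold is 10240 rows):

```python
import os
import tempfile

THRESHOLD = 5     # rows per file; e.g. 10240 in production
FLUSH_EVERY = 1   # flush OS buffers after every row (or every 100, etc.)

workdir = tempfile.mkdtemp()

class RowWriter:
    """Append rows to a flat file; rotate the file once THRESHOLD rows are written."""
    def __init__(self, directory):
        self.directory = directory
        self.seq = 0
        self._open_new()

    def _open_new(self):
        self.seq += 1
        # hypothetical naming: "_writing" while open, "_ready" once handed over
        self.path = os.path.join(self.directory, f"batch_{self.seq}_writing")
        self.fh = open(self.path, "w")
        self.rows = 0

    def write_row(self, person_id, state):
        self.fh.write(f"{person_id},{state}\n")
        self.rows += 1
        if self.rows % FLUSH_EVERY == 0:
            self.fh.flush()
            os.fsync(self.fh.fileno())   # push OS buffers down to disk
        if self.rows >= THRESHOLD:
            self.fh.close()
            # hand the finished file over to service #2 by renaming it
            os.rename(self.path, self.path.replace("_writing", "_ready"))
            self._open_new()

w = RowWriter(workdir)
for i in range(12):
    w.write_row(i, i % 2)

print(sorted(os.listdir(workdir)))  # → ['batch_1_ready', 'batch_2_ready', 'batch_3_writing']
```

Every operation here is a plain append, flush, or rename: nothing that can stall on a network or a database engine.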
While all this happens in service #1, we have another one. Service #2 picks up the finished files and feeds them into Firebird: you attach a file to the Firebird database as an EXTERNAL TABLE, pump its rows into the real table, detach Firebird from the file, then rename the "Phone_1_Inserting" file into "Phone_2_Done" (persisting its changed state), and then you delete it.
All in all, you should DECOUPLE your services.
https://en.wikipedia.org/wiki/Coupling_%28computer_programming%29
One service's main responsibility is to be ready, without even a tiny pause, to get and save the data flow; the other service's responsibility is to transfer the saved data into the SQL database for ease of processing, and it is not a big issue if it sometimes delays for a few seconds, as long as it does not lose data in the end.