

Python script stops, no errors given

I have a Python script that needs to be running all the time. Sometimes it runs for a whole day, sometimes only for an hour or so.

import time
import RPi.GPIO as GPIO
import fdb

con = fdb.connect(dsn='10.100.2.213/3050:/home/trainee2/Desktop/sms', user='sysdba', password='trainee') #connect to the database
cur = con.cursor() #initialize the cursor

pinnen = [21,20,25,24,23,18,26,19,13,6,27,17] #these are the GPIO pins we use; they are the same on all PIs and we need them in this sequence
status = [0] * 12 #an empty array where we'll save the status of each pin
ids = []
controlepin = [2] * 12 #same as the status array, only one step behind, so we can tell where a change happened and send it

def getPersonIDs(): #here we get the IDs; defined before it is called below
    cur.execute("SELECT first 12 A.F_US_ID FROM T_RACK_SLOTS a order by F_RS_ID;") #here is where the code changes for each PI
    for (ID,) in cur: #each row is a one-column tuple, so unpack it
        ids.append(ID) #append all the IDs to the array
    ids.extend([''] * (12 - len(ids))) #pad to 12 entries so the loop below can index safely (only necessary for PI 3, when there are not enough users)

GPIO.setmode(GPIO.BCM) #initialize GPIO

getPersonIDs() #get the IDs we need

for p in range(0,12):
    GPIO.setup(pinnen[p],GPIO.IN) #set up all the pins to read out data

while True: #this will repeat endlessly
    for e in range(0,12):
        if ids[e]: #if there is a value in the ids
            status[e] = GPIO.input(pinnen[e]) #get the status of the GPIO: 0 is dark, 1 is light
            if status[e] != controlepin[e]: #if there are changes
                id = ids[e]
                if id != '': #if the id is not empty
                    if status[e] == 1: #if there is no cell phone present
                        #SEND 0; careful! Status 0 sends 1 and status 1 sends 0, to make sense in the database
                        cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, 0)", (id,)) #parameters must be a tuple: (id,) not (id)
                    else:
                        cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, 1)", (id,))
                con.commit() #commit the query
                controlepin[e] = status[e] #save the change so we don't spam our database
    time.sleep(1) #sleep for one second so the loop doesn't hammer the CPU and the database

The script is used for a cell phone rack: through LDRs I can detect whether a cell phone is present, and I then send that data to a Firebird database. The scripts run on my Raspberry Pis.

Can it be that the script just stops if the connection is lost for a few seconds? Is there a way to make sure the queries are always sent?

"Can it be that the script just stops if the connection is lost for a few seconds?"

More to the point, the script IS stopping for every Firebird command, including con.commit(), and it only continues when Firebird has processed the command/query.

So, not knowing much about Python libraries, I would still give you some advice.

1) Use parameters and prepared queries as much as you can.

if status[e] == 1: #if there is no cell phone present
    cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (? ,0)",(id)) #SEND 0, carefull! Status 0 sends 1, status 1 sends 0 to let it make sense in the database!!
else:
    cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (? ,1)",(id))

That is not the best idea. You force the Firebird engine to parse the query text and build the query again and again, a waste of time and resources.

The correct approach is to make a single INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?,?) query, prepare it, and then run the already-prepared query, changing only the parameters. You would prepare it once, before the loop, and then run it many times.

Granted, I do not know offhand how to prepare queries in the Python library, but I think you will find examples.
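For instance, a minimal sketch with the fdb driver, whose cursors expose a prep() method according to the fdb documentation (verify the exact call against the driver version you have installed):

insert_entry = cur.prep("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) values (?, ?)") #parse and build the query once, before the loop

#inside the polling loop, reuse the already-prepared statement:
state = 0 if status[e] == 1 else 1 #keeps the deliberate inversion from the original code
cur.execute(insert_entry, (id, state)) #only the parameters change on each call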

2) Do not use an SQL server to save every single data element you get. That is a known malpractice, warned against as long as a decade ago, especially with a lazy versioned engine like InterBase/Firebird.

The thing is, with every statement of yours Firebird checks some internal statistics, and sometimes it decides the time has come to do housekeeping.

For example, your select statement can trigger garbage collection: Firebird might stop to scan the whole table, find the orphaned obsolete versions of rows and clear them away. Your insert statement can trigger index re-creation: if Firebird thinks the B-tree of the index has become too one-sided, it drops it and builds a new balanced tree, reading out the whole table (and yes, reading the table may provoke GC on top of the tree re-creation).

More so, let us steer away from Firebird specifics: what would you do if Firebird crashes? It just crashes; it is a program, and like every program it may have bugs. Or, for example, you run out of disk space and Firebird can no longer insert anything into the database. Where would your hardware sensor data end up then? Wouldn't it just be lost?

The pages linked below are in Russian; http://www.translate.ru usually works better for them than Google or Microsoft translation, especially if you set the vocabulary to computers.

See #7 at http://www.ibase.ru/dontdoit/ - "Do not issue commit after every single row". #24 at https://www.ibase.ru/45-ways-to-improve-firebird-performance-russian/ suggests committing packets of about a thousand rows as a sweet spot between too many transactions and too much uncommitted data. Also check #9, #10, #16, #17 and #44 at the last link.
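Applied to this script, batching the commits could be as small a change as the following sketch (the threshold of 1000 follows the advice above; for a slow trickle of sensor data you would also want to commit on a timer, so rows do not sit uncommitted for long):

pending = 0 #rows inserted since the last commit

#...each time cur.execute(...) inserts a row:
pending += 1
if pending >= 1000: #commit in packets, not per row
    con.commit()
    pending = 0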

I believe the overall structure of your software has to be split into two services.

  1. Query data from the hardware sensors and save it to a plain, stupid binary flat file. Since this is the most simplistic format there can be, performance and reliability are maximized.
  2. Take the finished binary files and insert them into the SQL database in bulk-insert mode.

So, for example, you set a threshold of 10240 rows.

  1. Service #1 creates the file "Phones_1_Processing" with a well-defined BINARY format. It also creates and opens the "Phones_2_Processing" file, but keeps it at zero length. Then it keeps adding rows to "Phones_1_Processing" for a while. It might also flush the OS file buffers after every row, or every 100 rows, or whatever gives the best balance between reliability and performance (see the sketch after this list).
  2. When the threshold is reached, service #1 switches to recording the incoming data into the already created and opened "Phones_2_Processing" file. It can be done instantly, by changing one file-handle variable in your program.
  3. Then service #1 closes "Phones_1_Processing" and renames it to "Phones_1_Complete".
  4. Then service #1 creates a new empty file "Phones_3_Processing" and keeps it open with zero length. Now it is back at state 1, ready to instantly switch its recording to the new file when the current one is full.
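A minimal sketch of such a writer, assuming each reading can be encoded as a fixed-size record (here a hypothetical timestamp/user-id/state layout) and using the file names and 10240-row threshold from the example above:

import os
import struct
import time

ROW_FMT = "<qii" #assumed record layout: timestamp, user id, state (16 bytes per row)
THRESHOLD = 10240 #rows per file, as in the example above

num = 1 #number of the file currently being written
current = open("Phones_%d_Processing" % num, "wb")
standby = open("Phones_%d_Processing" % (num + 1), "wb") #pre-created, kept at zero length
rows_written = 0

def save_row(user_id, state):
    global num, current, standby, rows_written
    current.write(struct.pack(ROW_FMT, int(time.time()), user_id, state))
    current.flush() #or flush every N rows; balance reliability against speed
    rows_written += 1
    if rows_written >= THRESHOLD:
        current.close()
        os.rename("Phones_%d_Processing" % num, "Phones_%d_Complete" % num)
        num += 1
        current = standby #the instant switch: just swap the file handle
        standby = open("Phones_%d_Processing" % (num + 1), "wb")
        rows_written = 0

The standby file is opened in advance precisely so that the switch in step 2 is a single variable assignment rather than a potentially slow open() call.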

The key point here is that this service should only do the simplest and fastest operations, since any random delay means your realtime-generated data is lost and will never be recovered. BTW, how can you disable garbage collection in Python, so that it does not suddenly "stop the world"? Okay, half-joking, but the point stands. GC is a random, non-deterministic bogging-down of your system, and it is a poor fit for regular, non-buffered hardware data acquisition. The primary acquisition of non-buffered data is best done by the most simple, predictable services; GC is a good global optimization, but the price is that it tends to generate sudden local no-service spikes.
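(As it happens, CPython does let you switch the cycle collector off; whether that is wise for a long-running process is another matter, since cyclic garbage would then accumulate:)

import gc
gc.disable() #stops automatic collection of reference cycles; plain reference counting still frees most objects immediately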

While all this happens in service #1, we have another one.

  1. Service #2 keeps monitoring for changes in the folder you use to save the primary data. It subscribes to "some file was renamed" events and ignores the others. Which facility to use? Ask a Linux guru: iNotify, dNotify, FAM, Gamin, anything of that kind that suits.
  2. When service #2 is awakened with "a file was renamed and xxxx is the new name", it checks whether the new file name ends with "_Complete". If it does not, it was a bogus event.
  3. When the event is for a new "Phones_...._Complete" file, it is time to "bulk insert" it into Firebird. Google for "Firebird bulk insert", for example http://www.firebirdfaq.org/faq209/ (see the sketch after this list).
  4. Service #2 renames "Phones_1_Complete" to "Phones_1_Inserting", so the state of the data packet is persisted (as the file name).
  5. Service #2 attaches this file to the Firebird database as an EXTERNAL TABLE.
  6. Service #2 proceeds with the bulk insert, as described above. De-activating indexes, it opens a no-auto-undo transaction and keeps pumping the rows from the external table into the destination table. If the service or the server crashes here, you still have a consistent state: the transaction gets rolled back, and the file name shows it is still pending insertion.
  7. When all the rows are pumped (frankly, if Python can work with binary files, it would be a single INSERT-FROM-SELECT), you commit the transaction, delete the external table (detaching Firebird from the file), rename the "Phones_1_Inserting" file to "Phones_1_Done" (persisting its changed state), and then you delete it.
  8. Then service #2 looks whether there are new "_Complete" files already waiting in the folder, and if not, it goes back to step 1 and sleeps until a FAM event awakens it.
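A sketch of steps 4-7 in Python, under loud assumptions: the hypothetical EXT_PHONES column list must match the binary record layout from the earlier sketch byte for byte, ExternalFileAccess must be enabled in firebird.conf, and the file path must be reachable by the Firebird server process (so this service would run on, or share storage with, the database host):

import os
import fdb

con = fdb.connect(dsn='10.100.2.213/3050:/home/trainee2/Desktop/sms',
                  user='sysdba', password='trainee')
cur = con.cursor()

def bulk_insert(complete_path):
    inserting = complete_path.replace("_Complete", "_Inserting")
    os.rename(complete_path, inserting) #step 4: persist the packet state in the file name
    #step 5: attach the binary file as an external table; the column layout
    #(here the assumed timestamp/user-id/state record) must match the file exactly
    cur.execute("CREATE TABLE EXT_PHONES EXTERNAL FILE '%s' "
                "(F_TS BIGINT, F_US_ID INTEGER, F_EN_STATE INTEGER)" % inserting)
    con.commit()
    #steps 6-7: one INSERT-FROM-SELECT pumps the whole packet in one transaction
    cur.execute("INSERT INTO T_ENTRIES (F_US_ID, F_EN_STATE) "
                "SELECT F_US_ID, F_EN_STATE FROM EXT_PHONES")
    con.commit()
    cur.execute("DROP TABLE EXT_PHONES") #detach Firebird from the file
    con.commit()
    done = inserting.replace("_Inserting", "_Done")
    os.rename(inserting, done) #persist the final state, then clean up
    os.remove(done)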

All in all, you should DECOUPLE your services.

https://en.wikipedia.org/wiki/Coupling_%28computer_programming%29

One service's main responsibility is to be ready, without even a tiny pause, to receive and save the data flow; the other service's responsibility is to transfer the saved data into the SQL database for ease of processing. It is not a big issue if the latter sometimes delays for a few seconds, as long as it does not lose data in the end.
