Insert data into a MySQL table every second (once per second)
I'm new to Python, Raspberry Pi and MySQL, and I hope you can help me. I'm trying to write a Python script that inserts data into a MySQL table every second. I can insert data, but not at the regular interval I want; I've already tried a lot and can't find a solution to my problem. Here is my Python code and the data inserted into the MySQL table:
#!/usr/bin/env python
import MySQLdb
import time

while True:
    db = MySQLdb.connect("localhost", "mauro", "12345", "temps")
    curs = db.cursor()
    try:
        curs.execute("""INSERT INTO thetemps
                        values(0, CURRENT_DATE(), NOW(), 28)""")
        db.commit()
        print "Data committed"
    except:
        print "Error"
        db.rollback()
    db.close()
    time.sleep(1)
+-----+------------+----------+------+
| id | date | time | temp |
+-----+------------+----------+------+
| 107 | 2015-11-06 | 19:16:41 | 28 |
| 108 | 2015-11-06 | 19:16:42 | 28 |
| 109 | 2015-11-06 | 19:16:45 | 28 |
| 110 | 2015-11-06 | 19:16:46 | 28 |
| 111 | 2015-11-06 | 19:16:47 | 28 |
| 112 | 2015-11-06 | 19:16:48 | 28 |
| 113 | 2015-11-06 | 19:16:56 | 28 |
| 114 | 2015-11-06 | 19:17:00 | 28 |
| 115 | 2015-11-06 | 19:17:03 | 28 |
| 116 | 2015-11-06 | 19:17:05 | 28 |
| 117 | 2015-11-06 | 19:17:06 | 28 |
| 118 | 2015-11-06 | 19:17:07 | 28 |
| 119 | 2015-11-06 | 19:17:08 | 28 |
| 120 | 2015-11-06 | 19:17:09 | 28 |
| 121 | 2015-11-06 | 19:17:10 | 28 |
| 122 | 2015-11-06 | 19:17:11 | 28 |
+-----+------------+----------+------+
As you can see, sometimes the script inserts data at the expected interval, and sometimes there is an 8-second gap between rows. So, my question is: is it possible for the interval between rows to be 1 second every time? What am I doing wrong? Sorry for the bad English and thanks in advance!
You're establishing a new connection to the database server on each iteration, which can take an arbitrary amount of time. Moving .connect(), etc. outside of the loop may give you more consistent timings:
db = MySQLdb.connect("localhost", "mauro", "12345", "temps")
curs = db.cursor()
while True:
    try:
        curs.execute("""INSERT INTO thetemps
                        values(0, CURRENT_DATE(), NOW(), 28)""")
        db.commit()
        print "Data committed"
    except:
        print "Error"
        db.rollback()
    time.sleep(1)
db.close()
When trying to insert the new rows, don't use a transaction; perhaps some table is locked.
is possible to the interval between the data be 1 second every time?
Theoretically, yes, but in practice there are too many other factors outside of your control that are more likely to get in the way. Some of these include, but are not limited to, the OS scheduler and other I/O load on the system.
This means that even if your system was idle most of the time, time.sleep(1) is not guaranteed to always sleep for exactly 1 second; and even if it did, the system may have been doing something else (e.g. more I/O) and required different amounts of time to perform the same operations each time.
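One common mitigation (not in the original code, just a sketch) is to sleep until an absolute deadline instead of a fixed second after each insert, so a slow iteration only shortens the next sleep rather than shifting the whole schedule back:

```python
import time

def sleep_until(deadline):
    """Sleep until an absolute time.time() deadline; return at once if late."""
    remaining = deadline - time.time()
    if remaining > 0:
        time.sleep(remaining)

# Demo with a short period so it finishes quickly; use period = 1.0 to get
# one insert per second as in the question.
period = 0.1
deadline = time.time()
for _ in range(3):
    # ... perform the INSERT here ...
    deadline += period     # advance the absolute schedule
    sleep_until(deadline)  # a slow insert only shortens this sleep
```

This won't beat the scheduler's own jitter, but it stops the small delays from accumulating over many iterations.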
Also, instead of creating a new connection every time inside the loop, you should keep the connection open and save the overhead.
What am i doing wrong?
I don't think you're doing anything particularly wrong here. The code looks OK, except for the extra overhead of creating a new connection every time, which you shouldn't. That aside, the issue here boils down to factors outside of your control.
That being said, there are some things you can do to improve your chances.
In addition to avoiding the overhead of opening and closing the database connection on every iteration, you should check the storage engine used for the table. For example, depending on your MySQL version, the default might still be MyISAM, which requires table locking for writing.
In contrast, InnoDB only requires row locking when writing to the table, which should improve things if something else is using the table. If you find you're not using InnoDB, issue an ALTER TABLE ... query to change the storage engine.
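A minimal sketch of that check and one-time conversion, using the schema and table names from the question (the helper name is mine; it takes any DB-API connection such as the MySQLdb one above):

```python
# Query the engine for one table from information_schema.
CHECK_ENGINE_SQL = (
    "SELECT engine FROM information_schema.tables "
    "WHERE table_schema = %s AND table_name = %s"
)

def ensure_innodb(db, schema="temps", table="thetemps"):
    """Convert the table to InnoDB if it is still on another engine.
    Returns True if a conversion was issued, False if already InnoDB."""
    curs = db.cursor()
    curs.execute(CHECK_ENGINE_SQL, (schema, table))
    (engine,) = curs.fetchone()
    if engine.lower() != "innodb":
        # ALTER TABLE rebuilds the table, so run this once, not per insert.
        curs.execute("ALTER TABLE thetemps ENGINE = InnoDB")
        return True
    return False
```

Run it once at startup rather than inside the insert loop.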
Transactions are meant to group a set of two or more queries as a single unit, but you're submitting individual queries. Instead, you should configure MySQL to have automatic commits enabled, so that it doesn't have to wait for an explicit commit request after your query is submitted and executed, saving some communication overhead between the server and your client.
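With MySQLdb this is a one-line change on the connection object, which disables the need for the explicit db.commit() after each insert (the wrapper name here is mine, just to keep the sketch testable):

```python
def enable_autocommit(db):
    """Turn on autocommit so each INSERT is committed as it executes,
    skipping the explicit db.commit() round trip per row."""
    db.autocommit(True)  # equivalent to executing "SET autocommit = 1"
    return db
```

Call it once right after MySQLdb.connect(), then drop the db.commit() line from the loop.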
You can set a higher priority for your program in order for the scheduler to be more helpful here. It might also help to do the same for the database service/process.
Other user-level tasks could also have their priorities lowered a bit, if necessary.
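On Linux this maps to the nice level: a process can always lower its own priority, while raising it (a negative increment) requires elevated privileges. A small sketch (the function name is mine):

```python
import os

def lower_priority(increment=5):
    """Raise our nice value (i.e. lower scheduling priority). Pass a
    negative increment, which requires root, to raise priority instead."""
    try:
        return os.nice(increment)  # returns the new nice value
    except OSError:
        # Not permitted (e.g. negative increment as a non-root user).
        return os.nice(0)          # report the unchanged nice value
```

You would typically raise the logging script's priority (or lower everyone else's) rather than the other way around.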
Try creating the connection to the db before the while condition, to keep the connection open.
The problem I see is that connecting plus inserting takes time; this adds up, and your process will eventually fall behind.
What I would do is separate the data gathering (make sure you read your temperature every second) from the data loading (loading can take longer than a second if needed, but you won't fall behind).
So, if I were you, I'd have two separate scripts running in parallel and communicating through some simple, fast and reliable mechanism. A list in Redis would probably work great. You could also use something like ZMQ.
Something like this:
# gather.py
import pickle, time
from datetime import datetime
import redis

r = redis.Redis()
while True:
    temp = get_temp()  # your existing sensor-reading function
    r.lpush('temps', pickle.dumps([temp, datetime.now()]))
    time.sleep(1)
and..
# load.py
import pickle
import MySQLdb
import redis

r = redis.Redis()
while True:
    # Block until new temps are available in redis
    key, data = r.brpop('temps')
    datapoints = [data]
    # Grab everything else that queued up while we were busy
    while True:
        item = r.rpop('temps')
        if item is None:
            break
        datapoints.append(item)
    db = MySQLdb.connect("localhost", "mauro", "12345", "temps")
    curs = db.cursor()
    try:
        for p in datapoints:
            temp, ts = pickle.loads(p)
            curs.execute("INSERT INTO thetemps (temp, timestamp) VALUES (%s, %s)",
                         (temp, ts))
        db.commit()
        print "Data committed"
    except:
        print "Error"
        db.rollback()
You could add some improvements to the above, like reusing the db connection, making sure you don't lose the temps if there's a DB error, using a namedtuple instead of a list for the datapoints, etc.
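For the "don't lose the temps on a DB error" improvement, one possible sketch (the function name is mine) is to hand failed batches back to a requeue callback, e.g. one that pushes them back onto the Redis list:

```python
def flush_batch(curs, db, requeue, datapoints):
    """Insert a batch of (temp, timestamp) pairs; on failure roll back and
    pass every point to the requeue callback so nothing is lost."""
    try:
        for temp, ts in datapoints:
            curs.execute(
                "INSERT INTO thetemps (temp, timestamp) VALUES (%s, %s)",
                (temp, ts))
        db.commit()
        return True
    except Exception:
        db.rollback()
        for point in datapoints:
            # e.g. requeue = lambda p: r.lpush('temps', pickle.dumps(p))
            requeue(point)
        return False
```

The next brpop/rpop pass then picks the requeued points up again once the database is reachable.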