Improving MySQL read time, MySQLdb

I have a table with more than a million records, with the following structure:

mysql> SELECT * FROM Measurement;
+----------------+---------+-----------------+------+------+
| Time_stamp     | Channel | SSID            | CQI  | SNR  |
+----------------+---------+-----------------+------+------+
| 03_14_14_30_14 |       7 | open            |   40 |  -70 |
| 03_14_14_30_14 |       7 | roam            |   31 |  -79 |
| 03_14_14_30_14 |       8 | open2           |   28 |  -82 |
| 03_14_14_30_15 |       8 | roam2           |   29 |  -81 |....

I am reading data from this table into Python for plotting. The problem is that the MySQL reads are too slow, and it takes hours to produce the plots, even after using MySQLdb.cursors.SSCursor (as suggested by a few on this forum) to speed things up.

import MySQLdb as mdb
import MySQLdb.cursors

con = mdb.connect('localhost', 'testuser', 'conti', 'My_Freqs',
                  cursorclass=mdb.cursors.SSCursor)
cursor = con.cursor()
cursor.execute("SELECT Time_stamp FROM Measurement")
for row in cursor:
    pass  # ... do processing ...

Will normalizing the table help speed up the task? If so, how should I normalize it?

PS: Here is the result of EXPLAIN:

+------------+--------------+------+-----+---------+-------+
| Field      | Type         | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+-------+
| Time_stamp | varchar(128) | YES  |     | NULL    |       |
| Channel    | int(11)      | YES  |     | NULL    |       |
| SSID       | varchar(128) | YES  |     | NULL    |       |
| CQI        | int(11)      | YES  |     | NULL    |       |
| SNR        | float        | YES  |     | NULL    |       |
+------------+--------------+------+-----+---------+-------+

The problem is probably that you are looping over the cursor instead of dumping out all the data at once and then processing it. You should be able to dump a couple million rows in a few seconds. Try something like:

cursor.execute("select Time_stamp FROM Measurement")
data = cusror.fetchall()
for row in data: 
   #do some stuff...
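If a couple million rows is too much to hold in memory at once, a middle ground is the standard DB-API fetchmany(). A minimal sketch, assuming the SSCursor connection from the question and an arbitrary batch size of 10,000:

cursor.execute("SELECT Time_stamp FROM Measurement")
while True:
    batch = cursor.fetchmany(10000)  # pull rows 10,000 at a time
    if not batch:
        break
    for row in batch:
        pass  # do processing on each row

This keeps memory bounded while still avoiding one Python-level fetch per row.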

Well, since you're saying the whole table has to be read, I guess you can't do much about it. It has more than 1 million records... you're not going to optimize much on the database side.

How much time does it take you to process just one record? Maybe you could try optimizing that part. But even if you got down to 1 millisecond per record, it would still take you about half an hour to process the full table. You're dealing with a lot of data.
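To find out, a rough timing sketch might help; this assumes the data list from a fetchall() call as above, and a hypothetical process_row() standing in for your per-record work:

import time

sample = data[:10000]  # time a sample of rows first
t0 = time.time()
for row in sample:
    process_row(row)  # hypothetical: your per-record processing
per_record_ms = (time.time() - t0) / len(sample) * 1000
print("%.3f ms per record" % per_record_ms)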

Maybe run multiple plotting jobs in parallel? With the same numbers as above, dividing your data into 6 equal-sized jobs would (theoretically) give you the plots in 5 minutes.
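A sketch of that idea with multiprocessing, assuming the rows can be split reasonably evenly on Channel (the channel list below is hypothetical; each worker must open its own connection, since MySQL connections cannot be shared across processes):

from multiprocessing import Pool

import MySQLdb as mdb

def plot_channel(channel):
    # Each worker opens its own connection.
    con = mdb.connect('localhost', 'testuser', 'conti', 'My_Freqs')
    cursor = con.cursor()
    cursor.execute("SELECT Time_stamp, CQI, SNR FROM Measurement "
                   "WHERE Channel = %s", (channel,))
    rows = cursor.fetchall()
    con.close()
    # ... build and save the plot for this channel from rows ...
    return channel, len(rows)

if __name__ == '__main__':
    channels = [1, 6, 7, 8, 11, 13]  # hypothetical: the channels in the table
    pool = Pool(6)
    print(pool.map(plot_channel, channels))
    pool.close()
    pool.join()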

Do your plots have to be fine-grained? You could look for ways to ignore certain values in the data and generate a complete plot only when the user needs it (wild speculation here; I really have no idea what your plots look like).
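For instance, a coarse-overview sketch (continuing the speculation) that keeps only every 100th row, which turns roughly a million rows into about 10,000 points:

# Coarse overview: keep every 100th row; re-read at full
# resolution only when the user actually needs the detail.
cursor.execute("SELECT Time_stamp, SNR FROM Measurement")
data = cursor.fetchall()
sampled = data[::100]
# ... plot `sampled` instead of the full data set ...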
