PHP+MySQL is too slow on one machine

I've tried all the suggestions I could find from similar questions, none of them seem to help.

I'm running a PHP script to add some data into the database - only around 1K lines. Here is the full code I'm using:

<?php
header('Content-Type: text/plain');
$t_start = microtime(true);
require('../php/connect.php');
$data = json_decode(file_get_contents('base/en/stringlist.js'));
foreach ($data as $cat => $values) {
  $category = mysql_real_escape_string($cat);
  foreach ($values as $key => $info) {
    $name = mysql_real_escape_string($key);
    $text = mysql_real_escape_string($info->text);
    $tip = mysql_real_escape_string(isset($info->tip) ? $info->tip : '');
    $query = "INSERT INTO locale_strings (name,text,tip,category) VALUES ('$name','$text','$tip','$category')";
    if (!mysql_query($query)) {
      echo mysql_error() . "\n";
    }
  }
}
echo 'Time: ' . round(microtime(true) - $t_start, 4) . ' seconds.';
?>

(I apologize for the mysql_ syntax.) I have used this on 3 PCs, all running Win7/8 with a fairly recent XAMPP installation. On two of the machines, this script takes about 2-3 seconds to execute, while on the third one it times out after 30 seconds (after adding ~900 rows). What could cause it to run 10x slower?

If I comment out the mysql_query line, the script takes 0.012 seconds to run, so the code itself isn't the problem. The database is on localhost, and localhost is listed in the etc/hosts file - same as on the other machines.

Here's the table structure:

CREATE TABLE IF NOT EXISTS `locale_strings` (
  `id` int(11) NOT NULL,
  `name` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `category` varchar(255) NOT NULL,
  `text` text NOT NULL,
  `locked` int(11) NOT NULL DEFAULT '0',
  `tip` text NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=4833 DEFAULT CHARSET=utf8;

ALTER TABLE `locale_strings`
  ADD PRIMARY KEY (`id`);

ALTER TABLE `locale_strings`
  MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=4833;

The InnoDB storage engine is ACID compliant. By default, each query runs in its own transaction (autocommit mode), and each commit forces a disk I/O operation, which is quite expensive: every INSERT makes the hard disk drive (HDD) write a record immediately, instead of letting the OS schedule the write. A mechanical HDD handles roughly 300 I/O operations per second, so writing ~900-1000 records one at a time can easily take 2-3 seconds.
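
As a quick sanity check, you don't even need to rewrite the script to see this effect: a minimal sketch (untested, assuming the same connect.php and loop as in your original code) is to wrap the whole loop in one transaction so InnoDB flushes to disk once instead of ~1000 times:

mysql_query('START TRANSACTION');
foreach ($data as $cat => $values) {
  // ... same escaping and INSERT queries as in the original loop ...
}
// One flush for all the rows
if (!mysql_query('COMMIT')) {
  echo mysql_error() . "\n";
}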

By contrast, the MyISAM storage engine lets the OS schedule the writes. The records aren't immediately written to disk; they're buffered in memory before being saved. While this makes MyISAM faster for this kind of workload, the trade-off is the risk of data loss (if the system were to crash or lose power while there was data in the buffer which hadn't yet been written to disk).
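
If you only want to confirm that the per-insert flush is the bottleneck, a quick test (unsafe to keep if you need InnoDB's durability guarantees) is to switch the table's engine for the import and back afterwards:

-- Test only: MyISAM buffers writes, trading durability for speed.
ALTER TABLE locale_strings ENGINE=MyISAM;
-- ... run the import, then restore:
ALTER TABLE locale_strings ENGINE=InnoDB;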

To optimize this, the usual approach is to wrap several inserts into a single transaction. One way is to use PDO: prepare the statement once, start a transaction, and commit after, say, 1000 inserts. That lets the hard drive write a large batch of rows in one operation.

Your code rewritten using PDO would look similar to this (note: untested, don't copy-paste, it's for reference only):

// connect.php - as in your original setup, this connection code belongs in connect.php
$dsn = 'mysql:dbname=testdb;host=127.0.0.1';
$user = 'dbuser';
$password = 'dbpass';

try
{
    $pdo = new PDO($dsn, $user, $password, array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES 'utf8'"));
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
}
catch (PDOException $e)
{
    echo 'Connection failed: ' . $e->getMessage();
    exit; // without this, the script would continue with an undefined $pdo
}

//===================================================================
// The rest of the code
//===================================================================

// Decode JSON
$data = json_decode(file_get_contents('base/en/stringlist.js'));

// Check if there were any errors decoding JSON
if(empty($data))
{
    echo "Something went wrong, error: " . json_last_error_msg();
    exit;
}

try
{
    // Prepare the statement - it's prepared once, used multiple times
    $stmt = $pdo->prepare("INSERT INTO locale_strings (name, text, tip, category) VALUES (:name, :text, :tip, :category)");

    // Start the transaction

    $pdo->beginTransaction();

    // Loop through the data, bind parameters, execute and when done - commit
    foreach ($data as $cat => $values)
    {
        foreach ($values as $key => $info)
        {
            $stmt->bindValue(':name', $key, PDO::PARAM_STR);
            $stmt->bindValue(':text', $info->text, PDO::PARAM_STR);
            $stmt->bindValue(':tip', isset($info->tip) ? $info->tip : '', PDO::PARAM_STR);
            $stmt->bindValue(':category', $cat, PDO::PARAM_STR);

            $stmt->execute();
        }
    }

    // And finally, tell the HDD to write the data down.
    // Note - in this example everything goes to disk in a single commit.
    // That might not be the sweet spot, so you can optimize later by testing
    // how many records per commit give the best throughput for your disk.
    $pdo->commit();
}
catch(PDOException $e)
{
    if($pdo->inTransaction())
    {
        $pdo->rollBack();
    }

    // I assume you know how to handle exceptions, messages, trace etc.
}
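
If one big commit turns out not to be the sweet spot mentioned above, a variant (again untested, reusing the same $pdo, $stmt and $data as in the example; $batchSize is just an illustrative name) is to commit in fixed-size batches:

// Commit every $batchSize rows instead of all at once
$batchSize = 1000; // tune this to your disk
$count = 0;

$pdo->beginTransaction();
foreach ($data as $cat => $values)
{
    foreach ($values as $key => $info)
    {
        $stmt->bindValue(':name', $key, PDO::PARAM_STR);
        $stmt->bindValue(':text', $info->text, PDO::PARAM_STR);
        $stmt->bindValue(':tip', isset($info->tip) ? $info->tip : '', PDO::PARAM_STR);
        $stmt->bindValue(':category', $cat, PDO::PARAM_STR);
        $stmt->execute();

        // Flush this batch to disk, then start the next one
        if (++$count % $batchSize === 0)
        {
            $pdo->commit();
            $pdo->beginTransaction();
        }
    }
}
$pdo->commit(); // flush whatever is left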
