This routine worked until we moved to a much faster Debian Linux server. The snippet below reads through a CSV file and inserts a record into a table if no record with the same manufacturers_name already exists. What happens is that the first record is inserted, but when the next record with the same name is read, the Execute call fails to find the previously inserted record and inserts a duplicate instead of updating. If I run the same routine a second time without emptying the table, all of the records are found and only updates take place. I thought the issue might be speed, so I tried putting sleeps into the process, but it had no effect. Any ideas? Thanks
while (($data = fgetcsv($handle, 0, chr(9), chr(0))) !== FALSE) {
    // ****** Setup stuff here ******
    $msql_data_array = array('manufacturers_name' => $data[$manufacturer_sn]);
    $sql = "SELECT COUNT(*) AS total, manufacturers_id, manufacturers_name"
         . " FROM " . TABLE_MANUFACTURERS
         . " WHERE manufacturers_name = \"" . $data[$manufacturer_sn] . "\"";
    $manufacturers = $db->Execute($sql);
    // This always returns 0, even if the record was just added by a
    // previous iteration.
    if ($manufacturers->fields['total'] == 0) {
        // ***** Insert a new record *****
    } else {
        // ***** Update the current record *****
    }
}
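One way to sidestep the check-then-insert pattern entirely is to let the database handle both cases in a single upsert statement, so the code no longer depends on the SELECT seeing the row that the previous iteration inserted. The sketch below is a minimal illustration of that idea, not the original routine: it assumes a UNIQUE index on manufacturers_name, uses a simplified table with a hypothetical seen_count column to make the update visible, and runs against an in-memory SQLite database via PDO so it works standalone. With MySQL, the equivalent statement would be INSERT ... ON DUPLICATE KEY UPDATE.

```php
<?php
// Minimal upsert sketch (assumed schema, not the original Zen Cart table).
// Requires a UNIQUE constraint on manufacturers_name so the database can
// detect the duplicate instead of the application code.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE manufacturers (
    manufacturers_id   INTEGER PRIMARY KEY AUTOINCREMENT,
    manufacturers_name TEXT UNIQUE,
    seen_count         INTEGER DEFAULT 1
)');

// One prepared statement handles both the insert and the update path.
$upsert = $db->prepare(
    'INSERT INTO manufacturers (manufacturers_name) VALUES (:name)
     ON CONFLICT(manufacturers_name) DO UPDATE SET seen_count = seen_count + 1'
);

// Simulate CSV rows: the duplicate "Acme" triggers the update branch.
foreach (array('Acme', 'Acme', 'Globex') as $name) {
    $upsert->execute(array(':name' => $name));
}

$rows = $db->query(
    'SELECT manufacturers_name, seen_count
     FROM manufacturers ORDER BY manufacturers_name'
)->fetchAll(PDO::FETCH_ASSOC);
print_r($rows);
```

As a bonus, the prepared statement also removes the string-interpolated WHERE clause from the original snippet, which is open to SQL injection if the CSV data is not trusted.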
This looks like a simple .csv import. You could replace the PHP code with a command-line mysqlimport call, like so:
mysqlimport --fields-terminated-by='\t' db_name import.csv
(Note that mysqlimport derives the target table name from the file name, so import.csv is loaded into a table named import.)
Ref: http://dev.mysql.com/doc/refman/5.7/en/mysqlimport.html