
How to insert a new record if old record is updated?

I am using MySQL as the database for a PHP application.

I have to parse a CSV sheet and insert the data into the database only if the old record has changed.

One way is to fetch the records from the database using the IDs I have in my CSV and then compare the values; if there is a difference, add a new record. But because I have hundreds of MBs of data, I cannot do this back and forth with the database. Is there a way to do it completely in SQL?

The ID isn't unique; the new record that has to be inserted will use the same ID.

For example, the following is the current record:
| 1001 | M Danish | Singapore |

If the country changes to USA, the table will have two rows, as follows:
| 1001 | M Danish | Singapore |
| 1001 | M Danish | USA |

As I understand from your question, you could add another column to the database table holding an "updated" value of 0 or 1 (false or true), check that flag for the record before inserting the CSV data into the database, and then act on that value (false or true).

Roundtrips to the DB are typically quite expensive in terms of relative cost. When facing this type of situation, I usually try to build a local map (i.e., a PHP array with string keys) with the values to compare, allowing me to roundtrip only the updates/inserts the DB actually needs.

Here's an overly-simplified example for the sake of illustration:

// map of existing records, built in PHP from a previous run
$records = [
    "1001 | M Danish | Singapore" => true,
    // ... other records
];

// check if the value is present -- a constant-time operation on a map
if (!isset($records["1001 | M Danish | USA"])) {
    // insert into the DB
}

Note that the above example does not iterate through all the records, handle duplicate keys, delete old keys, etc. Hopefully, though, it gives you the general idea: greatly reduce the number of DB roundtrips (or the overall size of one roundtrip) by doing some quick work in PHP before making the query.
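To make the pattern above a little more concrete, here is a minimal sketch that builds the map in a single roundtrip and then inserts only the changed rows. The `people` table, its columns, the `$pdo` connection, and `$csvRows` (indexed arrays from your CSV parser) are all illustrative assumptions, not names from the question:

// Build the lookup map in ONE roundtrip instead of one query per CSV row.
// $pdo is an already-open PDO connection; `people` is a hypothetical table.
$map = [];
foreach ($pdo->query('SELECT id, name, country FROM people') as $row) {
    $map[$row['id'] . ' | ' . $row['name'] . ' | ' . $row['country']] = true;
}

// For each parsed CSV row, insert only if that exact combination is absent.
$insert = $pdo->prepare('INSERT INTO people (id, name, country) VALUES (?, ?, ?)');
foreach ($csvRows as [$id, $name, $country]) {
    if (!isset($map[$id . ' | ' . $name . ' | ' . $country])) {
        $insert->execute([$id, $name, $country]);
    }
}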

Add an auto-increment ID to your table. Then, in PHP, run a query to select the last row that matches the ID from your CSV. Compare the two and insert if there are differences. This is the most efficient way I can think of with your table structure.
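As a sketch, assuming a hypothetical `people` table with an auto-increment column `auto_id` alongside the CSV ID (names are illustrative, not from the question), the "last row for this ID" query could look like:

-- Fetch the most recent version of the row for CSV id 1001
SELECT name, country
FROM people
WHERE id = 1001
ORDER BY auto_id DESC
LIMIT 1;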

I would create one table with the data that does not change, and a second table with the duplicated IDs (the CSV ID) into which you insert only on changes. This will make things a lot easier and quicker for you. The second table will have an auto-increment ID so you can find the last row with a given CSV ID.
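A rough sketch of that two-table layout (table and column names are assumptions for illustration):

-- Static attributes: one row per CSV id
CREATE TABLE person_static (
    csv_id INT PRIMARY KEY,
    name   VARCHAR(100)
);

-- One row per change; auto_id orders the versions for a given csv_id
CREATE TABLE person_changes (
    auto_id INT AUTO_INCREMENT PRIMARY KEY,
    csv_id  INT NOT NULL,
    country VARCHAR(100),
    KEY (csv_id)
);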

Hope it's clear.

You can run an INSERT ... ON DUPLICATE KEY UPDATE statement. This works only if you have a unique key defined on the column(s) you want to be unique:

INSERT INTO table1 (col1, col2)
VALUES ('val1', 'val2')
ON DUPLICATE KEY UPDATE
    col1 = VALUES(col1),
    col2 = VALUES(col2);

This will update the existing row with the values val1, val2, or insert a new row if none was found.
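For reference, the unique key this statement relies on could be created like this (the index name and the choice of column are assumptions; pick whichever column(s) should be unique in your schema):

-- Make col1 the unique key that triggers the ON DUPLICATE KEY UPDATE branch
ALTER TABLE table1 ADD UNIQUE KEY uq_table1_col1 (col1);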

If you have a lot of inserts/updates, you can do this in bulk:

INSERT INTO table1 (col1, col2)
VALUES
    ('val1', 'val2'),
    ('val3', 'val4'),
    ('val5', 'val6'),
    ('val7', 'val8'),
    ('val9', 'val10'),
    ('val11', 'val12'),
    ('val13', 'val14')
ON DUPLICATE KEY UPDATE
    col1 = VALUES(col1),
    col2 = VALUES(col2);
