
Insert 50 thousand records into MySQL

I want to insert 50,000 records into MySQL through a web service written in Java, but only 20,000 records are getting inserted.

I don't think there is a size (number of records) limitation in MySQL.

Is there some way I can insert/select 50k records in a single go (bulk)?

Split it into multiple transactions; don't insert all 50k records in one go. I think that's the problem.

Edit: as it is a web service, maybe the connection is broken during the transfer. Please make sure that is not the case =).

Answer to the OP's comment: instead of

INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);
... 49,990 INSERTs later
INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);

do

START TRANSACTION; -- my_beloved_transaction
INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);
... 2k INSERTs later
INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);
COMMIT;

START TRANSACTION; -- my_beloved_transaction
INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);
... 2k INSERTs later
INSERT INTO table (...) VALUES (...);
INSERT INTO table (...) VALUES (...);
COMMIT;

etc...
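
In JDBC terms the same idea is to turn auto-commit off and commit every couple of thousand rows yourself. A minimal sketch, assuming a hypothetical my_table with three columns and the rows passed in as Object[] arrays (none of those names come from the question):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class ChunkedInsert {
    // Sketch only: my_table, the column names and the Object[] row layout are assumptions.
    public static void insertInChunks(Connection conn, List<Object[]> rows) throws SQLException {
        conn.setAutoCommit(false); // we decide where each transaction ends
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO my_table (col1, col2, col3) VALUES (?, ?, ?)")) {
            int inChunk = 0;
            for (Object[] row : rows) {
                ps.setObject(1, row[0]);
                ps.setObject(2, row[1]);
                ps.setObject(3, row[2]);
                ps.executeUpdate();
                if (++inChunk == 2000) { // commit every 2k inserts, as above
                    conn.commit();
                    inChunk = 0;
                }
            }
            conn.commit(); // commit the final, partial chunk
        }
    }
}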

I don't know how you're doing your inserts, but you could just loop through what you want to insert and, every 5,000 records or so, insert that batch using the web service, then proceed to the next batch until you're done. So in this example you'd be making 10 calls to the web service, each with 5,000 records.

Check on the use of MySQL transactions so you can back out if anything goes wrong with a batch (I haven't used those myself in MySQL, so I can't help with that part).
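
A rough sketch of that client-side chunking, with RecordService.insertBatch standing in for whatever your web service client actually exposes (both names are made up for illustration):

import java.util.List;

public class BatchedUpload {
    private static final int BATCH_SIZE = 5000;

    // Hypothetical stand-in for your web service client interface.
    public interface RecordService {
        void insertBatch(List<?> records);
    }

    public static void upload(RecordService service, List<?> all) {
        // walk the full list in slices of at most 5,000 records,
        // making one web service call per slice
        for (int from = 0; from < all.size(); from += BATCH_SIZE) {
            int to = Math.min(from + BATCH_SIZE, all.size());
            service.insertBatch(all.subList(from, to));
        }
    }
}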

Unless this is a quick-and-dirty proof of concept, it shouldn't matter that it's a web service. The web service is just the external interface.

You should approach this as a MySQL/JDBC issue. If you need all or none of the inserts to succeed, you need a single long-running transaction, probably with a bulk insert.

The web service issue should be separate - you may well be worried about whether the client can wait for the inserts to complete for confirmation (making it synchronous) or whether you need a callback. That's an issue of web service design. Decouple them and treat the two separately.

Are you checking for errors when the query fails? Is it possible you are running up against the max_allowed_packet size for your server? I'm not sure what the behavior is with bulk inserts that aren't in transactions, but it can cause unusual errors with large SQL statements:

http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html
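
If you want to rule that out, you can read the current value over the same JDBC connection; a small sketch (the connection URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckMaxAllowedPacket {
    public static void main(String[] args) throws Exception {
        // URL, user and password are placeholders; point them at your own server.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'max_allowed_packet'")) {
            if (rs.next()) {
                // the value is reported in bytes; any single statement/packet must fit within it
                System.out.println(rs.getString("Variable_name") + " = " + rs.getString("Value"));
            }
        }
    }
}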

Maybe a memory issue? Try using a PreparedStatement with the addBatch() command and do your commits in batches:

Connection conn = ...; // your JDBC connection
conn.setAutoCommit(false); // so each flushed batch is committed explicitly
PreparedStatement stmt = conn.prepareStatement(...);
int count = 0;
for (MyObject eachData : dataList) {
    stmt.setObject(1, eachData.getDate());
    stmt.setBigDecimal(2, eachData.getValue1());
    stmt.setBigDecimal(3, eachData.getValue2());
    stmt.addBatch();
    if (++count >= 100) { // flush the batch periodically, so batches don't get too large
        int[] ints = stmt.executeBatch();
        conn.commit();
        log.log(Level.INFO, "Inserted " + ints.length + " new records");
        stmt.clearBatch();
        count = 0;
    }
}
// flush and commit whatever is left in the final batch
final int[] ints = stmt.executeBatch();
conn.commit();
log.log(Level.INFO, "Inserted " + ints.length + " new records");

Chances are, it's the implementation of the bulk/batch insert/update process that's causing the limitation. If you had more data in each row, you would find it failing after even fewer rows were inserted.

Try doing a subset at a time with multiple batch/bulk inserts.

You can use MySQL's LOAD DATA INFILE command. First write all the data to a text file, then load it into the database with LOAD DATA INFILE; it takes very little time and is a good way to insert a large number of records.
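
A minimal sketch of issuing that load from Java over JDBC, assuming a placeholder CSV file and table name; note that LOCAL infile support normally has to be enabled on both the client and the server:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LoadInfileExample {
    public static void main(String[] args) throws Exception {
        // URL, credentials, file path and table name are placeholders.
        // allowLoadLocalInfile is a Connector/J option; the server must also allow LOCAL infile.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb?allowLoadLocalInfile=true", "user", "password");
             Statement st = conn.createStatement()) {
            int rows = st.executeUpdate(
                "LOAD DATA LOCAL INFILE '/tmp/records.csv' "
                + "INTO TABLE my_table "
                + "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'");
            System.out.println("Loaded " + rows + " rows");
        }
    }
}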

When you run in a transaction, the database has to keep a rollback segment just in case the transaction fails. Disk and memory are associated with this log, so there may be a limit set. I'd check the defaults and see if perhaps you've exceeded one or both.

The benefit of committing smaller batches is that the rollback segment gets reset back to zero each time. That's why chunking it into smaller batches helps.
