
Does table size affect INSERT performance?

This is a question just for the sake of asking:

Barring all intermediate to advanced topics or techniques (clustered indices, BULK INSERTs, export/import tricks, etc.), does an INSERT take longer the larger a table grows?

This assumes that there is only one auto-increment column, ID [i.e., all new rows are INSERTed at the bottom, so no data has to be shuffled to accommodate a specific row position].


A link to a good "benchmarking MySQL" resource would be handy. I took Oracle in school, and so far that knowledge has done me little good on SO.

Thanks everyone.

Yes, but it's not the size of the table per se that matters; it's the size of the indices. Once index rewriting begins to thrash the disk, you'll notice a slowdown. A table with no indexes (of course, you'd never have such a thing in your database) should see no degradation. A table with minimal, compact indexes can grow relatively large without seeing degradation. A table with many large indices will start to degrade sooner.
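To make this concrete, here is a rough sketch of the kind of comparison described above. It uses Python's built-in sqlite3 module rather than MySQL (so the absolute numbers won't transfer), and the table/index names are made up for the example; the point is only that each secondary index adds per-INSERT work:

```python
import sqlite3
import time


def timed_inserts(with_index: bool, n: int = 50_000) -> float:
    """Insert n rows into a fresh table and return elapsed seconds."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    if with_index:
        # Every INSERT below must also update this index.
        con.execute("CREATE INDEX idx_val ON t (val)")
    rows = ((f"value-{i}",) for i in range(n))
    start = time.perf_counter()
    con.executemany("INSERT INTO t (val) VALUES (?)", rows)
    con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed


plain = timed_inserts(with_index=False)
indexed = timed_inserts(with_index=True)
print(f"no index:   {plain:.3f}s")
print(f"with index: {indexed:.3f}s")
```

On most runs the indexed case is measurably slower, and the gap widens as more (or wider) indexes are added.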

My experience has been that performance degrades once the dataset's index no longer fits in memory. When that happens, duplicate-key checks have to hit disk, and inserts slow down considerably. Make a table with as much data as you think you'll have to deal with, and do some testing and tuning. It's really the best way to know what you'll run into.

I can only share my experience. Hope it helps.

I am inserting lots of rows at a time into a huge database (several million entries). I have a script which prints the time before and after executing the inserts, and I haven't seen any drop in performance.

Hope that gives you an idea, though I am on SQLite, not MySQL.
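The timing-script approach described in that answer can be sketched as follows, again with Python's stdlib sqlite3 (the batch size and table layout are arbitrary choices for illustration, not from the thread). It times each batch separately so you can see whether batches slow down as the table grows:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
# A secondary index, so inserts do more than append to the end of the table.
con.execute("CREATE INDEX idx_payload ON t (payload)")

batch = 20_000
timings = []
for step in range(5):
    rows = ((f"row-{step}-{i}",) for i in range(batch))
    start = time.perf_counter()  # "time before"
    con.executemany("INSERT INTO t (payload) VALUES (?)", rows)
    con.commit()
    timings.append(time.perf_counter() - start)  # "time after" minus "before"

total = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
for step, t in enumerate(timings):
    print(f"after {(step + 1) * batch:>7} rows: {t:.3f}s for the batch")
```

As long as the index fits in memory, the per-batch times should stay roughly flat, which matches the answerer's observation; on a dataset large enough to spill the index to disk, later batches would start to creep up.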

The speed is not affected as long as MySQL can update the full index in memory; once it begins to swap the index out, inserts become slower. This is also what happens when you rebuild an enormous index in one go using ALTER TABLE.

