
MongoDB write performance

Given: 20M documents, averaging 550 bytes each, inserted with the PHP driver on a single machine.

First insert (not mongoimport) with journaling on and WriteConcern at the default (1). It took about 12 hours. That made me wonder, so I tried a second import.

Second, I used batchInsert() with --nojournal and WriteConcern=0 and noted the performance. In total it ALSO took 12 hours?! What was interesting: it started at about 40,000 records being inserted per minute, ended up at 2,500 records per minute, and I can only imagine it would have been down to 100 records per minute towards the end.
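As a minimal sketch of the second attempt, assuming the legacy PHP driver (MongoClient / MongoCollection, which this era of code implies) and a hypothetical database and collection name, an unacknowledged batch insert looks like this:

```php
<?php
// Sketch only: connect with the legacy driver and insert a batch
// with WriteConcern 0 (unacknowledged), matching the question's setup.
// "mydb" / "docs" are placeholder names, not from the original post.
$m = new MongoClient();
$collection = $m->mydb->docs;

// $docs is an array of document arrays, e.g. array(array('field' => 'value'), ...)
$collection->batchInsert($docs, array('w' => 0));
```

Note that `--nojournal` is a mongod startup option, not a driver setting, so it is passed when launching the server rather than in this code.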

My questions are:

  1. I assumed that by turning journaling off, setting w=0, and using batchInsert(), my total insertion time should drop significantly!
  2. How is the significant drop in inserts per minute explained?

--UPDATE--

The machine is a Core Duo 3GHz with 8GB of RAM. RAM usage stays steady at 50% during the whole process; CPU usage, however, goes high. In PHP I have ini_set('memory_limit', -1) so memory usage is not limited.

If it is only a one-time migration, I would suggest you delete all indexes before these inserts, using the deleteIndex(..) method.

After all inserts have finished, use ensureIndex(..) to get the indexes back.
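A rough sketch of that drop-then-rebuild pattern with the legacy driver (the index field name here is a hypothetical placeholder):

```php
<?php
// Sketch: drop a secondary index before the bulk load,
// then recreate it once all inserts are done.
// "someField" stands in for whatever fields are actually indexed.
$collection->deleteIndex('someField');

// ... run all the batch inserts here ...

$collection->ensureIndex(array('someField' => 1));
```

This helps because maintaining indexes on every insert is extra work per document; building them once at the end over the full data set is usually cheaper.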

PS. From the numbers you provided, this is not a big amount of data; you have probably misconfigured the MongoDB server. Please provide your MongoDB server config and memory size, and maybe I can find something else to improve.

Replying to your question (2): probably your server is running out of memory after some number of inserts.

After a lot of hair pulling, I realized the backlog effect. Interestingly enough, when I bundled my documents into batches of 5,000 rows, batch insert worked like magic and the import finished in just under 4 minutes!!
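The bundling step above can be sketched as plain PHP: split the full document array into fixed-size chunks and feed each chunk to batchInsert(). The helper name and the collection setup are assumptions for illustration, not from the original post.

```php
<?php
// Hypothetical helper: split documents into batches of 5000,
// the size that worked in the answer above.
function makeBatches(array $docs, $batchSize = 5000) {
    // array_chunk preserves document order across batches.
    return array_chunk($docs, $batchSize);
}

// Usage against a live server (legacy MongoClient driver assumed):
// $collection = (new MongoClient())->mydb->docs;
// foreach (makeBatches($allDocs) as $batch) {
//     $collection->batchInsert($batch, array('w' => 0));
// }
```

Keeping each batch small bounds the memory the driver must serialize at once, which is consistent with the throughput collapse seen when inserting everything in one go.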

This tool gave me the idea: https://github.com/jsteemann/BulkInsertBenchmark
