
When to use RocksDB and when to use MMFiles storage engine in ArangoDB?

We use ArangoDB to store telco data. The main goal of our application is to let users build certain types of reports very quickly. The reports are mostly based on data we get from ArangoDB by traversing different graphs. The business logic of the reports is not simple, which leads to very complex AQL queries with multiple nested traversals (sub-queries).
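For illustration, a query with the nested-traversal shape described above might look like the following, run through arangosh. This is a minimal sketch only: the database, collection and graph names (`telco`, `subscribers`, `callGraph`, `deviceGraph`) are invented for this example and are not taken from the actual schema.

```shell
# Hypothetical sketch of a nested AQL traversal: an outer graph traversal
# whose results feed an inner sub-query traversal over a second graph.
# All names below are placeholders, not the real schema.
arangosh --server.database telco --javascript.execute-string '
  var result = db._query(`
    FOR s IN subscribers
      FILTER s.region == "EU"
      FOR v1 IN 1..2 OUTBOUND s GRAPH "callGraph"
        LET devices = (
          FOR v2 IN 1..1 OUTBOUND v1 GRAPH "deviceGraph"
            RETURN v2.model
        )
        RETURN { subscriber: s._key, contact: v1._key, devices: devices }
  `).toArray();
  print(result.length);
'
```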

Quick Overview of the data we store in ArangoDB:

  • 28 collections with documents (the biggest collection consists of 3,500K documents; an average collection usually has 100K to 1,000K)
  • 3 collections with edges (335K, 3,500K and 15,000K edges)
  • 3 graphs (each graph is linked to one edge collection and the biggest graph has 23 from/to collections)

The overall data set takes about 28 GB of RAM when fully loaded (including indexes).

We have been using MMFiles for almost two years now and were very happy with the results, except for some problems:

  • unpredictable memory consumption, which I described in a separate question
  • very slow restarts (it takes 1 hour 30 minutes before the database is fully responsive again)
  • the fact that we have to use very expensive VMs with 64 GB of RAM to be able to fit all the data into RAM

After some research we started looking into the new RocksDB storage engine and read the available documentation.

From the documentation, and from the answers proposed to my question about the RAM consumption problem, I can see that RocksDB should be the way to go for us. All the documents say it is the new default engine for ArangoDB and that it should be used if you want to store more data than fits into RAM.

I installed the new ArangoDB 3.4.1 and converted our database from MMFiles to RocksDB (via arangodump and arangorestore). Then I ran some performance tests and found that all traversals became 2-6 times slower compared to what we had with the MMFiles engine. Some queries which took 20 seconds with the MMFiles engine now take 40 seconds with RocksDB, even if you run the same query multiple times (i.e. the data must already be cached).
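The migration path mentioned above is the standard dump-and-restore cycle. Roughly, it looks like this (the endpoints, database name and backup path are placeholders for illustration):

```shell
# 1. Dump the database from the old (MMFiles) server:
arangodump \
  --server.endpoint tcp://127.0.0.1:8529 \
  --server.database telco \
  --output-directory /backup/telco-dump

# 2. Restore it into a new server started with the RocksDB engine:
arangorestore \
  --server.endpoint tcp://127.0.0.1:8530 \
  --server.database telco \
  --create-database true \
  --input-directory /backup/telco-dump
```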

Update 2/15/2019:

We run ArangoDB inside a Docker container on an m4.4xlarge instance on AWS with 16 vCPUs and 64 GB of RAM. We allocated 32 GB of RAM and 6144 CPU units to the ArangoDB container. Here is a short summary of our tests (the numbers show the time it took to execute a particular AQL traversal query, in HH:mm:ss format):
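As a sketch, the container allocation described above could be expressed with plain `docker run` roughly like this (this is an assumption about how the limits were applied; "CPU units" in ECS terms map approximately to Docker CPU shares):

```shell
# Start ArangoDB 3.4.1 with a 32 GB memory limit and 6144 CPU shares,
# mirroring the allocation described in the text. The password value
# is a placeholder.
docker run -d --name arangodb \
  --memory 32g \
  --cpu-shares 6144 \
  -e ARANGO_ROOT_PASSWORD=changeme \
  -p 8529:8529 \
  arangodb/arangodb:3.4.1
```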

(Table: ArangoDB MMFiles vs. RocksDB engine performance)

Note: in this particular table we do not see the 10x performance degradation I mentioned in my original question. The maximum is 6 times slower, when we run AQL right after a restart of ArangoDB (which I guess is OK). But most of the queries are 2 times slower compared to MMFiles, even when you run them a second time, when all the data must already be cached in RAM. The situation is even worse on Windows (that is where I saw performance degradation of 10 times and more). I will post the detailed spec of my Windows PC with the performance tests a bit later.

My question is: is it expected behavior that AQL traversals are much slower with the RocksDB engine? Are there any general recommendations on when to use the MMFiles engine and when to use the RocksDB engine, and in which cases RocksDB is not an option?

With ArangoDB 3.7, support for MMFiles has been dropped, hence this question can now be answered with "use RocksDB".

It took us a while to mature the RocksDB-based storage engine in ArangoDB, but we now feel confident that it can fully handle all workloads.
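To confirm which storage engine a given server is actually running, the REST API exposes it at `/_api/engine` (endpoint and credentials below are placeholders):

```shell
# Query the server for its storage engine; the response is a JSON
# document whose "name" field is "rocksdb" or "mmfiles".
curl -s -u root:changeme http://127.0.0.1:8529/_api/engine
```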

We demonstrate how to work with parts of the RocksDB storage system, and the effects they have, in this article.
