
ArangoDB Key/Value Model: value maximum size

With regard to the Key/Value model of ArangoDB, does anyone know the maximum size per value? I have spent hours searching the Internet for this information but to no avail; you would think this is classified information. Thanks in advance.

The answer depends on several things, such as the storage engine and whether you mean the theoretical or the practical limit.

In the case of MMFiles, the maximum document size is determined by the startup option wal.logfile-size if wal.allow-oversize-entries is turned off. If it is on, there is no immediate limit.
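As a minimal sketch of how these options are set at startup (the engine flag and the 128 MB value are illustrative assumptions, not recommendations):

    # MMFiles: either allow oversize entries, or raise the WAL logfile size
    # (134217728 bytes = 128 MB is an arbitrary example value)
    arangod --server.storage-engine mmfiles \
            --wal.allow-oversize-entries true \
            --wal.logfile-size 134217728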

In the case of RocksDB, it might be limited by some of the server startup options, such as rocksdb.intermediate-commit-size, rocksdb.write-buffer-size, rocksdb.total-write-buffer-size, or rocksdb.max-transaction-size.
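For example (a rough sketch; the byte values below are arbitrary, and the right numbers depend on your workload and available memory), two of these limits can be raised at startup like this:

    # RocksDB: raise the options that can cap a single large write
    # (1 GB / 2 GB here are illustrative values only)
    arangod --server.storage-engine rocksdb \
            --rocksdb.intermediate-commit-size 1073741824 \
            --rocksdb.max-transaction-size 2147483648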

When using arangoimport to import a 1 GB JSON document, you will run into the default batch-size limit. You can increase it, but it appears to max out at 805306368 bytes (0.75 GB). The HTTP API seems to have the same limitation (/_api/cursor with bindVars).
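For instance (the file and collection names are made up for illustration):

    # raise the batch size (in bytes) up to the apparent 0.75 GB ceiling
    arangoimport --file big.json --type json --collection mycol \
                 --batch-size 805306368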

What you should keep in mind: mutating the document is potentially a slow operation because of the append-only nature of the storage layer. In other words, a new copy of the document with a new revision number is persisted, and the old revision will be compacted away some time later (I'm not familiar with all the technical details, but I think this is fair to say). For a 500 MB document, it seems to take a few seconds to update or copy it using RocksDB on a rather powerful system. It is much better to have many small documents.
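A rough sketch of the "many small documents" approach, using the HTTP document API (the collection name chunks and the part-key naming scheme are hypothetical; split the value on the client side and write one small document per slice):

    # store a large value as several small documents instead of one big one
    curl -X POST http://localhost:8529/_api/document/chunks \
         --data-binary '{"_key": "bigdoc-part-0", "data": "...first slice..."}'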
