
Is one large sorted set or many small sorted sets more memory performant in Redis?

I'm trying to design a data abstraction for Redis using sorted sets. My scenario is that I would have either ~60 million keys in one large sorted set, or ~2 million small sorted sets with maybe 10 keys each. In either scenario the functions I would be using are O(log(N)+M), so time complexity isn't a concern. What I am wondering about is the trade-off in memory impact. Having many sorted sets would allow for more flexibility, but I'm unsure whether the memory cost would become a problem. I know Redis says it now optimizes memory usage for small sorted sets, but it's unclear to me by how much, and at what size a set becomes too big to benefit.
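For context on the small-set optimization mentioned above: Redis stores a sorted set in a compact listpack encoding (called ziplist in versions before 7.0) as long as it stays under two configurable thresholds, and converts it to the regular skiplist encoding once either threshold is exceeded. A sketch of the relevant `redis.conf` directives, shown with their shipped defaults:

```
# redis.conf — a sorted set uses the memory-efficient listpack
# encoding only while BOTH limits hold; crossing either one
# converts it permanently to the skiplist encoding.
zset-max-listpack-entries 128   # max number of elements
zset-max-listpack-value 64      # max element length in bytes

# On Redis < 7.0 the same settings are named:
# zset-max-ziplist-entries 128
# zset-max-ziplist-value 64
```

Under these defaults, the ~10-element sets in the question would stay listpack-encoded (assuming member strings under 64 bytes), while the single 60-million-element set would always use the skiplist encoding. You can verify which encoding a key uses with `OBJECT ENCODING <key>`.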

If the dataset exceeds the memory limit of a single host, having many small sorted sets will help spread the load across different Redis instances.
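To illustrate the answer, here is a minimal sketch of hash-based key sharding in Python. The key pattern `user:<id>:scores` and the instance count are hypothetical; Redis Cluster does this natively by hashing keys with CRC16 into 16384 slots, while the sketch below uses the standard library's CRC32 for the same idea.

```python
import binascii

NUM_INSTANCES = 4  # hypothetical number of Redis hosts


def shard_for(set_key: str, num_instances: int = NUM_INSTANCES) -> int:
    """Map a sorted-set key to one of num_instances Redis hosts.

    Deterministic: the same key always lands on the same instance,
    so reads and writes for one small sorted set stay on one host.
    """
    return binascii.crc32(set_key.encode("utf-8")) % num_instances


# With ~2 million small sorted sets (e.g. one per user), each key
# routes independently, spreading memory across the fleet:
key = "user:12345:scores"
instance = shard_for(key)
```

A single 60-million-key sorted set cannot be split this way, since all its members live under one key on one host; that is the flexibility the many-small-sets design buys.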
