
Is one large sorted set or many small sorted sets more memory-performant in Redis?

I'm trying to design a data abstraction for Redis using sorted sets. My scenario is that I would have either ~60 million keys in one large sorted set, or ~2 million small sorted sets with maybe 10 members each. In either scenario the commands I would be using are O(log(N)+M), so time complexity isn't a concern. What I am wondering is what the trade-offs are in memory usage. Having many sorted sets would allow for more flexibility, but I'm unsure whether the memory cost would become a problem. I know Redis optimizes memory usage for smaller sorted sets, but it's unclear to me by how much, and at what size a set stops being "small".
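For context on the "small set" optimization: Redis stores a sorted set in a compact listpack encoding (ziplist before Redis 7) while it stays under two configurable thresholds, and converts it to a skiplist plus hash table, which costs considerably more memory per element, once either threshold is exceeded. With ~10 members per set, the many-small-sets layout would stay in the compact encoding under the defaults. A sketch of the relevant redis.conf settings (default values shown):

```
# A sorted set is stored as a compact listpack while BOTH limits hold:
zset-max-listpack-entries 128   # max number of members in the set
zset-max-listpack-value 64      # max length of any member, in bytes
# (On Redis versions before 7.0 these settings are named
#  zset-max-ziplist-entries / zset-max-ziplist-value.)
```

You can check which encoding a given key actually uses with `OBJECT ENCODING <key>`, and compare per-key memory with `MEMORY USAGE <key>`.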

If the dataset ever exceeds the memory limit of a single host, having many small sorted sets will also let you spread the load across multiple Redis instances.
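The answer above can be sketched as client-side sharding: hash each sorted-set key to pick which instance holds it. This is a minimal illustration, assuming a hypothetical `shard_for` helper and a fixed shard count; it only computes the placement and does not talk to Redis.

```python
import hashlib


def shard_for(key: str, num_shards: int) -> int:
    """Map a sorted-set key to one of num_shards Redis instances
    using a stable hash (hypothetical client-side sharding scheme)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards


# Each small sorted set (e.g. one per user) can live on a different
# instance, which a single 60M-member set cannot do:
keys = [f"user:{i}:scores" for i in range(5)]
placement = {key: shard_for(key, num_shards=4) for key in keys}
```

In practice you would more likely use Redis Cluster, which does this transparently via hash slots, but the principle is the same: many small keys distribute; one giant key does not.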
