
How to scale Redis Queue

We are shifting from a monolithic to a microservice architecture for our e-commerce marketplace application. We chose Redis pub/sub for microservice-to-microservice communication and also for some push notification purposes. The push notification strategy is as follows:

Whenever an order is created (i.e. a customer creates an order), the backend publishes an event to the respective channel (queue), and the push-notification microservice consumes this event (a JSON message) and sends a push notification to the seller's mobile device.
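A minimal sketch of the publish side of this strategy, assuming redis-py; the channel name order-events and the event fields are placeholders for illustration, not the actual implementation:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def publish_order_created(order_id: int, seller_id: int) -> None:
    # Build the JSON event for the order and publish it to the channel
    # that the push-notification microservice subscribes to.
    event = {"type": "order_created", "order_id": order_id, "seller_id": seller_id}
    r.publish("order-events", json.dumps(event))

publish_order_created(order_id=42, seller_id=7)
```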

For the time being we are using a redis-server installed on our Ubuntu machine without any hassle. The headache comes in the future, when millions of orders may be generated at a single point in time; how can we handle that situation? That means we need to scale the Redis queue, right?

My exact question (regardless of the above scenario) is:

How can I horizontally scale the Redis queue instead of increasing the RAM on the same machine?

Whenever an order is created (i.e. a customer creates an order), the backend publishes an event to the respective channel (queue), and the push-notification microservice consumes this event (a JSON message) and sends a push notification to the seller's mobile device.

IIUC you're sending messages over Redis PUB/SUB, which is not durable. That means if the only producer is up while other services/consumers are down, those consumers will miss messages: any service that is down loses all messages sent while it was down.
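A quick way to see this fire-and-forget behaviour, assuming redis-py: PUBLISH returns the number of subscribers that actually received the message, so a return value of 0 means the event was simply dropped.

```python
import redis

r = redis.Redis()

# PUBLISH does not store the message anywhere; it only hands it to
# subscribers that are connected at this exact moment.
receivers = r.publish("order-events", '{"type": "order_created"}')
if receivers == 0:
    print("no subscriber was listening; the event is lost")
```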

Now let's assume you're using a Redis LIST and other combinations of data structures to solve the missing-events issue.
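For reference, a minimal durable-queue sketch along those lines, assuming redis-py and a hypothetical key orders:queue: LPUSH on the producer side, blocking BRPOP on the consumer side.

```python
import json
import redis

r = redis.Redis()
QUEUE_KEY = "orders:queue"  # hypothetical key name

def enqueue(event: dict) -> None:
    # The event sits in the list until a consumer pops it, so a consumer
    # that was down can catch up when it comes back.
    r.lpush(QUEUE_KEY, json.dumps(event))

def handle(event: dict) -> None:
    print("sending push notification for", event)

def consume_forever() -> None:
    while True:
        # BRPOP blocks until an item is available; LPUSH + BRPOP gives
        # FIFO delivery to exactly one consumer.
        _key, raw = r.brpop(QUEUE_KEY)
        handle(json.loads(raw))
```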

Scaling a Redis queue is a little tricky, since the entire data set is stored in a single list that resides on one Redis machine/host. What you can do is create your own partitioning scheme and design your Redis keys according to it, similar to what Redis does internally when a new master is added to a cluster; implementing consistent hashing would require some effort.

Very simply, you can distribute load based on the userId: for example, if the userId is between 0 and 1000, use queue_0; between 1000 and 2000, queue_1; and so on. This is a manual process that you can automate with a script. Whenever a new queue is added to the set, all consumers have to be notified, and the publisher must be updated as well.
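A sketch of that range-partitioning idea, assuming redis-py; the range size of 1000 and the queue_N key names just follow the example above.

```python
import json
import redis

r = redis.Redis()
RANGE_SIZE = 1000  # width of each userId range

def queue_for_user(user_id: int) -> str:
    # userId 0-999 -> queue_0, 1000-1999 -> queue_1, and so on.
    return f"queue_{user_id // RANGE_SIZE}"

def enqueue_order(user_id: int, event: dict) -> None:
    r.lpush(queue_for_user(user_id), json.dumps(event))

enqueue_order(1500, {"type": "order_created"})  # lands in queue_1
```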

Dividing based on a numeric range is a range-partitioning scheme; you can use a hash-partitioning scheme as well. Whether you use range or hash partitioning, whenever a new queue is added to the queue set, the consumers must be notified of the update. Consumers can then spawn a new worker for the new queue. Removing a queue is trickier, since all consumers must first have drained their respective queues.
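And a hash-partitioning variant, assuming redis-py; NUM_QUEUES and the per-queue worker threads are illustrative, and changing NUM_QUEUES is exactly the point at which publishers and consumers need to be notified.

```python
import json
import threading
import zlib
import redis

r = redis.Redis()
NUM_QUEUES = 4  # hypothetical; changing this requires updating all parties

def queue_for_user(user_id: str) -> str:
    # A stable hash of the key, modulo the number of queues, picks the partition.
    return f"queue_{zlib.crc32(user_id.encode()) % NUM_QUEUES}"

def worker(queue_key: str) -> None:
    # One worker per queue; a consumer spawns another worker when a
    # queue is added to the set.
    while True:
        _key, raw = r.brpop(queue_key)
        print(queue_key, "->", json.loads(raw))

for i in range(NUM_QUEUES):
    threading.Thread(target=worker, args=(f"queue_{i}",), daemon=True).start()
```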

You might consider using Rqueue.
