Reliable Webhook dispatching system

I am having a hard time figuring out a reliable and scalable solution for a webhook dispatch system.

The current system uses RabbitMQ with a queue for webhooks (let's call it events), which are consumed and dispatched. This system worked for some time, but now there are a few problems:

  • If a single user generates too many events, they fill up the queue, causing other users not to receive webhooks for a long time
  • If I split all events into multiple queues (by URL hash), it reduces the likelihood of the first problem, but it still happens from time to time when a very busy user hits the same queue
  • If I try to put each URL into its own queue, the challenge is to dynamically create/assign consumers to those queues. As far as the RabbitMQ documentation goes, the API is very limited when it comes to filtering for non-empty queues or for queues that have no consumers assigned.
  • As far as Kafka goes, from everything I have read about it, the situation would be the same within the scope of a single partition.

So, the question is: is there a better way/system for this purpose? Maybe I am missing a very simple solution that would prevent one user from interfering with another?

Thanks in advance!

You may experiment with several RabbitMQ features to mitigate your issue (without removing it completely):

  • Use a public random exchange to split events across several queues. It will mitigate large spikes of events and dispatch work to several consumers.

  • Set TTL policies on your queues. This way, RabbitMQ can dead-letter events to another group of queues (through another private random exchange, for example) if they are not processed fast enough.

You may have several "cycles" of events, varying the configuration (i.e. the number of cycles and the TTL value of each cycle). Your first cycle handles fresh events the best it can, mitigating spikes through several queues under a random exchange. If it fails to handle events fast enough, events are moved to another cycle with dedicated queues and consumers.

This way, you can ensure that fresh events have a better chance of being handled quickly, as they will always be published in the first cycle (and not behind a pile of old events from another user).
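A minimal sketch of the TTL/dead-letter wiring behind this idea, using Python and pika (both my assumption, as is every exchange and queue name here). It uses a plain fanout exchange for the second cycle to keep the sketch plugin-free; with the rabbitmq-random-exchange plugin you could use an x-random exchange instead.

```python
# Sketch: first-cycle queue whose stale events are dead-lettered into a
# second cycle, so fresh events never wait behind a pile of old ones.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Exchange feeding the second cycle (hypothetical name).
channel.exchange_declare(exchange="webhooks.cycle2", exchange_type="fanout")

# First-cycle queue: events sitting here longer than 30 s expire and are
# routed to the second-cycle exchange instead of blocking fresh events.
channel.queue_declare(
    queue="webhooks.cycle1.q1",
    arguments={
        "x-message-ttl": 30000,                      # per-queue TTL in ms
        "x-dead-letter-exchange": "webhooks.cycle2", # where expired events go
    },
)

# Second-cycle queue with its own (slower, dedicated) consumers.
channel.queue_declare(queue="webhooks.cycle2.q1")
channel.queue_bind(queue="webhooks.cycle2.q1", exchange="webhooks.cycle2")

connection.close()
```

You can repeat the same pattern for as many cycles as you want, with a different TTL per cycle.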

If you need ordering, unfortunately you depend on user input.

But in the Kafka world, there are a few things to mention here:

  • You can achieve exactly-once semantics with transactions, which lets you build a system similar to what regular AMQP brokers offer.
  • Kafka supports partitioning by key, which lets you preserve the processing order of messages with the same key (in your case userId).
  • Throughput can be increased by tuning the producer, broker and consumer sides (batch size, in-flight requests, etc.; see the Kafka documentation for more parameters).
  • Kafka supports message compression, which reduces network traffic and increases throughput (at the cost of a little more CPU for fast compression algorithms like LZ4).

Partitions are the most important thing in your scenario. You can increase the number of partitions to process more messages at the same time. Within one consumer group you can have at most as many consumers as partitions; if you scale beyond the partition count, the extra consumers won't be assigned any partitions and will sit idle.
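A minimal sketch of keying by userId, using Python and confluent-kafka (my assumption; the topic and group names are made up): events for the same user always land in the same partition and stay ordered, while different users are spread across partitions and consumers.

```python
from confluent_kafka import Producer, Consumer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def publish_webhook_event(user_id: str, payload: bytes) -> None:
    # The key picks the partition; same key -> same partition -> per-user ordering.
    producer.produce("webhook-events", key=user_id, value=payload)
    producer.poll(0)  # serve delivery callbacks

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "webhook-dispatchers",  # consumers in this group share the partitions
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["webhook-events"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # dispatch_webhook(msg.key(), msg.value())  # hypothetical HTTP delivery step
```

Starting more copies of this consumer process (up to the partition count) scales the dispatching horizontally.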

Unlike regular AMQP brokers, Kafka does not remove messages after you read them; it just tracks offsets per consumer group. This allows you to do several things with the same stream at the same time, like calculating a real-time user count in a separate process.

So, I am not sure if this is the correct way to solve this problem, but this is what I came up with.

Prerequisites: RabbitMQ with the message deduplication plugin

So my solution involves:

  • g:events queue - let's call it the parent queue. This queue contains the names of all child queues that need to be processed. It could probably be replaced with some other mechanism (like a Redis sorted set or something), but then you would have to implement the ack logic yourself.
  • g:events:<url> - these are the child queues. Each one contains only the events that need to be sent out to that url.

When posting a webhook payload to RabbitMQ, you post the actual data to the child queue, and then additionally post the name of the child queue to the parent queue. The deduplication plugin won't allow the same child-queue name to be posted twice, meaning that only a single consumer can receive that child queue for processing.
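A minimal publishing sketch, using Python and pika (both my assumption). It assumes the rabbitmq-message-deduplication plugin's conventions: the parent queue declared with the x-message-deduplication argument and the dedup key carried in the x-deduplication-header message header; queue names follow the scheme above.

```python
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Parent queue with deduplication enabled (plugin argument, assumed convention).
channel.queue_declare(queue="g:events", durable=True,
                      arguments={"x-message-deduplication": True})

def enqueue_webhook(url: str, payload: dict) -> None:
    child_queue = f"g:events:{url}"

    # 1. Put the actual event into the per-URL child queue.
    channel.queue_declare(queue=child_queue, durable=True)
    channel.basic_publish(exchange="", routing_key=child_queue,
                          body=json.dumps(payload))

    # 2. Announce the child queue in the parent queue. The dedup header keeps
    #    the same child-queue name from sitting in the parent queue twice.
    channel.basic_publish(
        exchange="",
        routing_key="g:events",
        body=child_queue,
        properties=pika.BasicProperties(
            headers={"x-deduplication-header": child_queue},
        ),
    )
```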

All your consumers consume the parent queue, and after receiving a message, they start consuming the child queue named in the message. Once the child queue is empty, you acknowledge the parent message and move on.
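And a matching consumer-side sketch under the same assumptions (Python + pika, hypothetical send_webhook delivery function): read a child-queue name from the parent queue, drain that child queue, then ack the parent message.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

def handle_parent(ch, method, properties, body):
    child_queue = body.decode()

    # Drain the child queue with basic_get until it reports empty.
    while True:
        get_method, get_props, event_body = ch.basic_get(queue=child_queue)
        if get_method is None:  # child queue is empty
            break
        # send_webhook(child_queue, event_body)  # hypothetical HTTP delivery
        ch.basic_ack(delivery_tag=get_method.delivery_tag)

    # Only now ack the parent message, so the child-queue name can be
    # announced (and picked up by a consumer) again.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)  # one child queue per consumer at a time
channel.basic_consume(queue="g:events", on_message_callback=handle_parent)
channel.start_consuming()
```

If a child queue takes too long, the callback can stop early, ack the parent message and republish the child-queue name to the back of the parent queue, as described below.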

This method allows for very fine-grained control over which child queues get processed. If some child queue is taking too much time, just ack the parent message and republish the same child-queue name to the end of the parent queue.

I understand that this is probably not the most efficient approach (there's also some overhead from constantly posting to the parent queue), but it is what it is.
