Rate limiting algorithm for throttling requests

I need to design a rate limiter service for throttling requests. For every incoming request, a method will check whether the requests per second have exceeded the limit. If they have, it will return the amount of time the request needs to wait before being handled.

I'm looking for a simple solution that only uses the system tick count and the rps (requests per second) limit. It should not use a queue or complex rate limiting algorithms and data structures.

Edit: I will be implementing this in C++. Also, note that I don't want to use any data structures to store the requests currently being executed. The API would look like:

if (!RateLimiter.Limit()) {
    // do work
    RateLimiter.Done();
} else {
    // reject request
}

The most common algorithm used for this is the token bucket. There is no need to invent something new; just search for an implementation for your technology/language.
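For illustration only, here is a minimal single-threaded token bucket sketch in C++; the class and member names are my own and not taken from any particular library:

#include <algorithm>
#include <chrono>

// Minimal token bucket: tokens refill continuously at ratePerSec, capped at
// capacity; each request consumes one token if one is available.
class TokenBucket {
    using Clock = std::chrono::steady_clock;
public:
    TokenBucket(double ratePerSec, double capacity)
        : ratePerSec_(ratePerSec), capacity_(capacity),
          tokens_(capacity), last_(Clock::now()) {}

    // Returns true if the request may proceed, false if it should be throttled.
    bool TryConsume() {
        Refill();
        if (tokens_ >= 1.0) {
            tokens_ -= 1.0;
            return true;
        }
        return false;
    }

private:
    void Refill() {
        const auto now = Clock::now();
        const double elapsed = std::chrono::duration<double>(now - last_).count();
        tokens_ = std::min(capacity_, tokens_ + elapsed * ratePerSec_);
        last_ = now;
    }

    double ratePerSec_;
    double capacity_;
    double tokens_;
    Clock::time_point last_;
};

For example, TokenBucket bucket(100.0, 20.0) would allow a sustained 100 requests per second with bursts of up to 20, and each incoming request would call bucket.TryConsume(). The sketch is not thread-safe; a real service would add a mutex or atomics around it.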

If your app is highly available / load balanced, you might want to keep the bucket information in some sort of persistent storage. Redis is a good candidate for this.
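As a rough sketch of the Redis idea, a shared fixed-window counter (simpler than a persisted token bucket, and less smooth) can be kept with INCR and EXPIRE. The example below uses the hiredis client; the key naming, the 100 rps limit, and the error handling are assumptions for illustration:

#include <hiredis/hiredis.h>
#include <chrono>
#include <cstdio>

// Fixed-window counter shared across instances: one Redis key per second,
// incremented on every request and expired once the window has passed.
// Returns true if the request is allowed under maxPerSecond.
bool AllowRequest(redisContext* ctx, long long nowSeconds, long long maxPerSecond) {
    char key[64];
    std::snprintf(key, sizeof(key), "ratelimit:%lld", nowSeconds);

    redisReply* reply = static_cast<redisReply*>(redisCommand(ctx, "INCR %s", key));
    if (reply == nullptr || reply->type != REDIS_REPLY_INTEGER) {
        if (reply) freeReplyObject(reply);
        return false;                       // treat Redis errors as "reject"
    }
    const long long count = reply->integer;
    freeReplyObject(reply);

    if (count == 1) {
        // First hit in this window: make sure the key expires on its own.
        reply = static_cast<redisReply*>(redisCommand(ctx, "EXPIRE %s 2", key));
        if (reply) freeReplyObject(reply);
    }
    return count <= maxPerSecond;
}

int main() {
    redisContext* ctx = redisConnect("127.0.0.1", 6379);   // assumed local Redis
    if (ctx == nullptr || ctx->err) return 1;

    const long long nowSeconds =
        std::chrono::duration_cast<std::chrono::seconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
    std::printf("allowed: %s\n", AllowRequest(ctx, nowSeconds, 100) ? "yes" : "no");

    redisFree(ctx);
    return 0;
}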

I wrote Limitd, which is a different approach: a daemon for limits. The application asks the daemon, using a limitd client, whether the traffic is conformant. The limit is configured on the limitd server, and the app is agnostic to the algorithm.

Since you give no hint of language or platform, I'll just give some pseudo code.

Things you are going to need:

  • a list of the currently executing requests
  • a way to get notified when a request has finished

and the code can be as simple as

var ListOfCurrentRequests;  // a list of the start times of the currently executing requests
var MaxAmountOfRequests;    // the concurrency limit
var AverageExecutionTime;   // if the execution time is non-deterministic, the best we can do is keep an average

// for each request, either execute it or return the PROBABLE amount of time to wait
function OnNewRequest(Identifier)
{
    if(count(ListOfCurrentRequests) < MaxAmountOfRequests) // if we have room
    {
        Struct Tracker
        Tracker.Request = Identifier;
        Tracker.StartTime = Now;   // save the start time
        AddToList(Tracker)         // add to the list of running requests
        return 0                   // no wait needed, the request can run now
    }
    else
    {
        return CalculateWaitTime() // return the PROBABLE time until a 'slot' is available
    }
}

// when a request has ended, release its 'slot' and update the average execution time
function OnRequestEnd(Identifier)
{
    Tracker = RemoveFromList(Identifier);
    UpdateAverageExecutionTime(Now - Tracker.StartTime);
}

function CalculateWaitTime()
{
    // the one that started first is PROBABLY the first to finish
    Tracker = GetTheOneThatHasBeenRunningTheLongest(ListOfCurrentRequests);
    // assume it will finish after roughly the average execution time, so the
    // remaining time is the average minus how long it has already been running
    ProbableTimeToFinish = AverageExecutionTime - (Now - Tracker.StartTime);
    return ProbableTimeToFinish
}

but keep in mind that there are several problems with this

  • it assumes that, by returning the wait time, the client will issue a new request after that time has passed. Since the time is only an estimate, you cannot rely on it to delay execution, so you can still overflow the system
  • since you are not keeping a queue and delaying the requests, a client can end up waiting longer than it actually needs to
  • and lastly, since you do not want to keep a queue to prioritize and delay the requests, you can get a live lock: you tell a client to come back later, but when it returns someone else has already taken its spot, and it has to come back yet again

so the ideal solution would be an actual execution queue, but since you don't want one, I guess this is the next best thing.
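Just to show what actually making callers wait (rather than telling them to retry) could look like, here is a small C++20 sketch using std::counting_semaphore to cap concurrency. This is my own illustration, not part of the original answer, and a semaphore gives no FIFO fairness, so it is still not a real queue:

#include <chrono>
#include <cstdio>
#include <semaphore>
#include <thread>
#include <vector>

// A counting semaphore caps how many requests run at once; callers block
// until a slot frees up instead of being told to come back later.
constexpr int kMaxConcurrent = 4;                         // assumed limit
std::counting_semaphore<kMaxConcurrent> slots(kMaxConcurrent);

void HandleRequest(int id) {
    slots.acquire();                                      // wait for a free slot
    std::printf("request %d running\n", id);
    std::this_thread::sleep_for(std::chrono::milliseconds(50));  // simulated work
    slots.release();                                      // free the slot
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 16; ++i) workers.emplace_back(HandleRequest, i);
    for (auto& t : workers) t.join();
    return 0;
}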

According to your comments, you just want a simple (not very precise) requests-per-second flag. In that case the code can be something like this:

var CurrentRequestCount;                    // how many requests have started in the current second
var MaxAmountOfRequests;                    // the rps limit
var CurrentTimestampWithPrecisionToSeconds  // the second the counter belongs to

function CanRun()
{
    if(Now.AsSeconds > CurrentTimestampWithPrecisionToSeconds) // a second has passed, reset the counter
    {
        CurrentTimestampWithPrecisionToSeconds = Now.AsSeconds;
        CurrentRequestCount = 0;
    }

    if(CurrentRequestCount >= MaxAmountOfRequests)
        return false;

    CurrentRequestCount++
    return true;
}

It doesn't seem like a very reliable way to control anything, but I believe it is what you asked for.
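Since the question mentions C++ and a tick/clock count, here is a minimal sketch of the same fixed-window idea using std::chrono and a mutex, shaped after the Limit()/Done() API from the question. The names and details are my own assumptions, and Done() is a no-op because this variant only counts request starts:

#include <chrono>
#include <mutex>

// Minimal fixed-window limiter matching the pseudo code above: it only uses
// the clock and an rps limit, with no queue and no per-request bookkeeping.
// Limit() returns true when the request should be REJECTED, so it can be used
// as `if (!limiter.Limit()) { /* do work */ }` like in the question.
class RateLimiter {
public:
    explicit RateLimiter(int maxRequestsPerSecond)
        : maxRps_(maxRequestsPerSecond) {}

    bool Limit() {
        std::lock_guard<std::mutex> lock(mutex_);
        const long long nowSec = std::chrono::duration_cast<std::chrono::seconds>(
            std::chrono::steady_clock::now().time_since_epoch()).count();
        if (nowSec != currentSecond_) {      // a new second started: reset the window
            currentSecond_ = nowSec;
            count_ = 0;
        }
        if (count_ >= maxRps_) return true;  // over the limit: reject
        ++count_;
        return false;                        // under the limit: allow
    }

    void Done() {}  // nothing to release in this variant; kept only for API symmetry

private:
    const int maxRps_;
    long long currentSecond_ = -1;
    int count_ = 0;
    std::mutex mutex_;
};

If the caller also needs a wait time on rejection, the natural value to return is the time remaining until the next second boundary, with the caveats about estimates discussed above.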
