
Best practice for sequential url requests nodejs

I've got a list of urls I need to request from an API, but to avoid generating a lot of load I would ideally like to space these requests x seconds apart. Once all the requests are complete, some follow-up logic runs (its details don't matter here).

There are many ways to go about it; I've implemented a couple:

A) Using a recursive function that walks an array holding all the urls and calls itself once each request has completed and a timeout has elapsed.
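A minimal sketch of approach A, promisified. `fetchUrl` is a placeholder for whatever request function is actually used (`fetch`, `axios`, etc.):

```javascript
// Approach A: recurse over the url array, waiting delayMs between requests.
function requestSequentially(urls, delayMs, fetchUrl) {
  // Base case: nothing left, resolve with an empty result list.
  if (urls.length === 0) return Promise.resolve([]);
  // Request the first url, wait delayMs, then recurse on the rest.
  return fetchUrl(urls[0]).then(result =>
    new Promise(resolve => setTimeout(resolve, delayMs)).then(() =>
      requestSequentially(urls.slice(1), delayMs, fetchUrl)
        .then(rest => [result, ...rest])
    )
  );
}
```

Because each recursive call only starts after the previous request and delay have finished, at most one request is in flight at any time.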

B) Setting a timeout for every request in a loop with incrementally increasing delays, collecting the returned promises, and executing the rest of the logic once Promise.all resolves.
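A minimal sketch of approach B, again with `fetchUrl` standing in for the real request function:

```javascript
// Approach B: schedule every request up front with an increasing delay,
// collect the promises, and wait for all of them with Promise.all.
function requestWithGaps(urls, gapMs, fetchUrl) {
  const promises = urls.map((url, i) =>
    // the i-th request is delayed by i * gapMs before it starts
    new Promise(resolve => setTimeout(resolve, i * gapMs))
      .then(() => fetchUrl(url))
  );
  return Promise.all(promises);
}
```

Note the difference from approach A: here all timers are armed immediately, so a slow request can overlap the next one; the gap is between start times, not between completion and the next start.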

These both work. However, what would you say is the recommended way to go about this? This is more of an academic question, and since I'm doing this to learn, I'd rather avoid using a library that abstracts away the interesting parts.

Your solutions are almost identical. I would choose a slightly different approach, though: I would write the initial promise and a sleep promise function, then chain them together.

function sleep(ms){
    return new Promise(resolve => setTimeout(resolve, ms));
}

ApiCall()
.then(() => sleep(1000))
.then(() => nextApiCall())...

Or a more modular version:

var promise = Promise.resolve();
myApiCalls.forEach(call => {
    promise = promise.then(call).then(() => sleep(1000));
});

In the end, go with whatever you understand, whatever makes the most sense to you, and whatever you will still understand in a month. The version you can read most easily is your preferred solution; performance won't matter here.

You could use something like the following to throttle requests per period.

If you want all urls to be processed even when some of them fail, you can catch the failed ones and pick them out of the result.

The code would look something like this:

const Fail = function(details){ this.details = details; };
const twoPerSecond = throttlePeriod(2, 1000);
const urls = ["http://url1", "http://url2", ..."http://url100"];
Promise.all(// even though 100 promises are created, only 2 per second will be started
  urls.map(
    (url) =>
      // pass the fetch function to twoPerSecond; twoPerSecond returns a promise
      //   immediately but will not start the fetch until there is an available time slot
      twoPerSecond(fetch)(url)
        .catch(e => new Fail([e, url]))
  )
)
.then(
  results => {
    const failed = results.filter(result => (result && result.constructor) === Fail);
    const succeeded = results.filter(result => (result && result.constructor) !== Fail);
  }
)
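The `throttlePeriod(count, periodMs)` helper used above is not shown in the answer. A minimal sketch of one possible implementation (an assumption, not the original author's code): it wraps a promise-returning function and queues each call behind an increasing start time so that at most `count` calls begin per `periodMs` window.

```javascript
// Returns a decorator: throttlePeriod(2, 1000) limits wrapped functions
// to at most 2 starts per 1000 ms, preserving call order.
function throttlePeriod(count, periodMs) {
  let slots = []; // ascending list of scheduled start times
  return fn => (...args) => {
    const now = Date.now();
    // discard slots whose rate-limit window has fully expired
    slots = slots.filter(t => t > now - periodMs);
    // start now if a slot is free, otherwise one period after the
    // count-th most recent scheduled start
    const start = slots.length < count
      ? now
      : slots[slots.length - count] + periodMs;
    slots.push(start);
    return new Promise(resolve => setTimeout(resolve, start - now))
      .then(() => fn(...args));
  };
}
```

With `count = 2` and `periodMs = 1000`, the first two wrapped calls start immediately, the third and fourth about a second later, and so on; each call still resolves with its own result.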
