Returning multiple asynchronous responses

I'm currently looking to set up an endpoint that accepts a request and returns the response data in increments as it loads.

The application of this is that, given one upload of data, I would like to calculate a number of different metrics for that data. As each metric gets calculated asynchronously, I want to return that metric's value to the front-end to render.

For testing, my controller looks as follows, trying to use res.write:

uploadData = (req, res) => {
  res.write("test");
  setTimeout(() => {
    res.write("test 2");
    res.end();
  }, 3000);
}

However, I think the issue stems from my client side, which I'm writing in React-Redux, calling that route through an Axios call. From my understanding, it's because the axios request closes once it receives the first response, and the connection doesn't stay open. Here is what my axios call looks like:

axios.post('/api', data)
  .then((response) => {
    console.log(response);
  })
  .catch((error) => {
    console.log(error);
  });

Is there an easy way to do this? I've also thought about streaming; however, my concern with streaming is that I would like each connection to be direct and unique between clients, and only open for a short amount of time (i.e. only open while the metrics are being calculated).

I should also mention that the resource being uploaded is a db, and I would like to avoid parsing it and opening a connection multiple times as a result of having multiple endpoints.

Thanks in advance, and please let me know if I can provide any more context.

One way to handle this while still using a traditional API would be to store the metrics in an object somewhere (a database or Redis, for example), then just long-poll the resource.

For a real-world example, say you want to calculate the following metrics for foo: time completed, length of request, bar, and foobar.

You could create an object in storage that looks like this:

{
  id: 1,
  lengthOfRequest: 123,
  .....
}

Then you would create an endpoint in your API, say metrics/{id}, that returns the object. Just keep calling the route until everything completes.

There are some obvious drawbacks to this, of course, but once you have enough information to know how long the metrics take to complete on average, you can tweak the time between the calls to your API.
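The polling loop itself can be as simple as the sketch below. getMetrics is a stand-in for the HTTP call to the hypothetical metrics/{id} route, simulated here so the snippet is self-contained (it pretends the metrics finish on the third poll); the interval argument is the knob described above:

```javascript
// Stand-in for GET /metrics/{id}: pretend the metrics finish on the third poll.
let calls = 0;
async function getMetrics(id) {
  calls += 1;
  return calls < 3
    ? { id, complete: false }
    : { id, complete: true, lengthOfRequest: 123 };
}

// Keep calling the route until everything completes.
async function pollUntilComplete(id, intervalMs = 50) {
  for (;;) {
    const metrics = await getMetrics(id);
    if (metrics.complete) return metrics;
    // Tune this delay once you know the average calculation time.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

pollUntilComplete(1).then((metrics) => console.log(metrics));
```

In the real client, getMetrics would be the axios/fetch call, and you could also resolve partial results on each iteration so the UI renders metrics as they land rather than waiting for complete.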
