
Continuously load data (blobs) into a POST request to node.js

I think this should be fairly simple, but I suspect I'm reading too much into things and it's not making sense.

What I am currently doing

I am creating a web app using Node + React to record audio in the browser. I'm using RecordRTC on the client side to record the audio from the user's microphone. All is fine and dandy, but sometimes it takes a long time to upload the audio file after the user has finished singing. I want to process this file before sending it back to the user in the next step, so speed is critical here as they are waiting for this to occur.

In order to make the experience smoother for my users, I want to kick off the audio upload process as soon as I begin to receive the audio blobs from RecordRTC. I can get access to these blobs because RecordRTC allows me to pass a timeslice value (in ms) and an 'ondataavailable' function, which gets passed a blob every timeslice milliseconds.
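For reference, that RecordRTC setup looks roughly like the following sketch. The stream acquisition and `handleChunk` are placeholders for your own code; only the `timeSlice`/`ondataavailable` options are RecordRTC's actual API.

```javascript
// Collect each blob as RecordRTC emits it; `handleChunk` is a
// placeholder name for whatever per-chunk handling you need.
const receivedChunks = [];

function handleChunk(blob) {
  // Called once per timeSlice with the latest audio blob.
  receivedChunks.push(blob);
}

// Browser-only wiring (not executed here; RecordRTC is a client library):
function startRecording(stream) {
  const recorder = new RecordRTC(stream, {
    type: 'audio',
    timeSlice: 1000,               // emit a blob every 1000 ms
    ondataavailable: handleChunk,  // receives each partial blob
  });
  recorder.startRecording();
  return recorder;
}
```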

What I have tried

Currently I have it all easily working with FormData() as I only send the file once the user has finished singing.
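That one-shot upload is roughly the following sketch. The `/upload` endpoint and the field names are placeholders, not the real API.

```javascript
// Build the multipart form for the finished recording plus any
// extra text fields (field names here are hypothetical).
function buildUploadForm(audioBlob, fields) {
  const form = new FormData();
  form.append('audio', audioBlob, 'recording.webm');
  for (const [key, value] of Object.entries(fields)) {
    form.append(key, value);
  }
  return form;
}

// Browser usage (not executed here): send once singing has finished.
async function uploadWhenFinished(audioBlob) {
  const form = buildUploadForm(audioBlob, { songId: 'abc123' });
  await fetch('/upload', { method: 'POST', body: form });
}
```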

  1. My first idea was to find an example of something like the Fetch API being used in a manner that resembles what I'm after. There are plenty of examples, but all of them treat the source file as already being available. Since I want to continually add blobs as they arrive (without being able to pre-determine when they might stop coming, as a user may decide to stop singing early), this doesn't look promising.
  2. I then considered a 'write my own' process whereby many requests are made instead of one long continuous one. This would involve attaching a unique identifier to each request and having the server concatenate the chunks whose ids match. However, I'm not sure how flexible this would be in, say, a multi-server environment, not to mention handling dropped connections etc., and there is no real way to tell the server to scrap everything if the user aborts, such as by closing the tab/webpage.
  3. Finally, I looked into what was available through the likes of NPM etc. without success, before conceding that perhaps my Google-fu was letting me down.

What I want

Ideally, I want to create a SINGLE new request once the recording begins, then take the blob every time I receive it in 'ondataavailable' and send it into that request (which pumps it through to my server as it receives each piece) indefinitely. Once the audio stops (I get this event from RecordRTC as well, so I can control this), I want to finish/close up my request so that the server knows it can now begin to process the file. As part of the uploading process, I also need to pass in a field or two of text data in the body, so this will need to be handled as well. On the server side, each chunk should be immediately accessible once the server receives it, so that I can begin to create/append to the audio file on the server side and have it ready for processing almost immediately after the user has finished singing.
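One way to approximate this single long-lived request in modern browsers is a fetch with a ReadableStream body. Note this is an assumption-laden sketch: streaming request bodies need `duplex: 'half'` and HTTP/2, and browser support is limited (and was not available when this question was written). The endpoint name is a placeholder.

```javascript
// A push-style chunk queue exposed as a ReadableStream, so blobs can
// be fed into an in-flight request as they arrive.
function createChunkStream() {
  let controller;
  const stream = new ReadableStream({
    start(c) { controller = c; },  // runs synchronously at construction
  });
  return {
    stream,
    push(bytes) { controller.enqueue(bytes); },
    close() { controller.close(); },  // signals "recording finished"
  };
}

// Browser usage (not executed here; requires request-stream support):
async function startStreamingUpload(chunkStream) {
  await fetch('/upload-stream', {
    method: 'POST',
    headers: { 'Content-Type': 'application/octet-stream' },
    body: chunkStream.stream,
    duplex: 'half',  // required for streaming request bodies
  });
}
```

Calling `close()` ends the body, which is the "finish/close up my request" signal described above.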

Note: The server is currently set to look for and process multi-part uploads via the multer library on npm, but I am more than happy to change this in order to get the functionality I want.

Thanks!

Providing an update for anyone that may stumble upon this question in their own search.

We ended up 'rolling our own' custom uploader which, on the client side, sends the audio blobs to the server in chunks of up to five 1-second blobs. Each request contains a 'request number', which is simply the previous request number + 1, starting at 1. The reason for sending five 1-second blobs is that RecordRTC, at least at the time, would not capture the final X seconds. E.g. if using 5-second blobs instead, a 38-second song would lose the final 3 seconds. Upon reaching the end of the recording, it sends a final request (marked with an additional header so the server knows it's the final request). The uploader works in a linked-list style to ensure that each request has been processed before the next one is sent.
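The batching and numbering logic above can be sketched as follows. This is a reconstruction, not the actual uploader code; the class and callback names are hypothetical, and the sequential ("linked-list") sending would live inside the `onBatch` callback.

```javascript
// Collect 1-second blobs, emit a numbered batch every `batchSize` blobs,
// and flush whatever remains (marked final) when recording stops.
class ChunkBatcher {
  constructor(batchSize, onBatch) {
    this.batchSize = batchSize;
    this.onBatch = onBatch;   // (blobs, requestNumber, isFinal) => void
    this.pending = [];
    this.requestNumber = 0;   // first emitted batch is number 1
  }

  add(blob) {
    this.pending.push(blob);
    if (this.pending.length === this.batchSize) this.flush(false);
  }

  flush(isFinal) {
    if (this.pending.length === 0 && !isFinal) return;
    this.requestNumber += 1;
    this.onBatch(this.pending, this.requestNumber, isFinal);
    this.pending = [];
  }
}
```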

On the server side, the five blobs are appended into a single 5-second audio blob via FFMPEG. This does introduce an external dependency, but we were already using FFMPEG for much of our application, so it was an easy decision. The produced file has the request number appended to its filename. Upon receiving the final request, we use FFMPEG again to do a final concatenation of all the received files to get our final file.

On very slow connections, we're seeing time savings upwards of 60 seconds, so it has significantly improved the app's usability on slower internet connections.

If anyone wants the code to use for themselves, please PM me through here. (It's fairly unpolished, but I will clean it up before sending.)
