
Record 5-second segments of audio using MediaRecorder and then upload to the server

I want to record the user's microphone in 5-second segments and upload each one to the server. I tried using MediaRecorder, calling its start() and stop() methods at a 5-second interval, but when I concatenate these recordings there is a "drop" sound between them. So I tried to record 5-second segments using the timeslice parameter of start():

navigator.mediaDevices.getUserMedia({
  audio: { channelCount: 2, volume: 1.0, echoCancellation: false, noiseSuppression: false }
}).then(function(stream) {
  const Recorder = new MediaRecorder(stream, {
    audioBitsPerSecond: 128000,
    mimeType: "audio/ogg; codecs=opus"
  });
  Recorder.start(5000); // fire dataavailable every 5 seconds
  Recorder.addEventListener("dataavailable", function(event) {
    const audioBlob = new Blob([event.data], { type: 'audio/ogg' });
    upload(audioBlob);
  });
});

But only the first segment is playable. What can I do, or how can I make all the blobs playable? I MUST record and then upload each segment. I CAN'T keep an array of blobs, because the user could record 24 hours of data or even more, and the data needs to be uploaded to the server while the user is recording, with a 5-second delay.

Thank you!

You have to understand how media files are built.
They are not just raw data that can be converted directly to either audio or video.

It will depend on the chosen format, but in the basic case you have what is called metadata, which is like a dictionary describing how the file is structured.

This metadata is necessary for the software that will later read the file, so that it knows how to parse the actual data the file contains.

The MediaRecorder API is in a strange position here, since it must write this metadata and, at the same time, append data whose extent is not yet known (it is a live recorder).

So what happens is that browsers put the main metadata at the beginning of the file, in such a way that they can simply push new data onto the end and still have a valid file (even though some information, like the duration, will be missing).

Now, what you get in the dataavailable event's data is only one part of a whole file that is still being generated.
The first part will generally contain the metadata and some other data, depending on when the event was told to fire, but the next parts won't necessarily contain any metadata.

So you can't just grab these parts as standalone files; the only file being generated is the one made of all these parts joined together in a single Blob.
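To make that concrete, here is a minimal sketch (reusing the upload helper from the question) of the only arrangement in which these slices form a playable file: all of them, in order, joined into a single Blob.

navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
  const recorder = new MediaRecorder(stream);
  const chunks = [];

  // each slice is only a continuation of the previous ones
  recorder.ondataavailable = e => chunks.push(e.data);

  // only the concatenation of ALL slices, in order, is a valid media file
  recorder.onstop = () => upload(new Blob(chunks, { type: recorder.mimeType }));

  recorder.start(5000); // the timeslice only controls how often dataavailable fires
});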


So, for your problem, you have several possible approaches:

  • You could send your server the latest slices you got from your recorder at a regular interval, and merge them server-side.

     const recorder = new MediaRecorder(stream);
     const chunks = [];
     recorder.ondataavailable = e => chunks.push(e.data);
     recorder.start(); // you don't need the timeslice argument

     setInterval(() => {
       // here we both empty the 'chunks' array and send its content to the server
       sendToServer(new Blob(chunks.splice(0, chunks.length)));
     }, 5000);

    And on your server side, you would append the newly sent data to the file being recorded (a minimal server-side sketch follows this list).

  • Another way would be to generate a lot of small standalone files; to do this, you could simply create a new MediaRecorder at a regular interval:

     function record_and_send(stream) {
       const recorder = new MediaRecorder(stream);
       const chunks = [];
       recorder.ondataavailable = e => chunks.push(e.data);
       recorder.onstop = e => sendToServer(new Blob(chunks));
       setTimeout(() => recorder.stop(), 5000); // we'll have a 5s media file
       recorder.start();
     }
     // generate a new file every 5s, passing the stream to each new recorder
     setInterval(() => record_and_send(stream), 5000);

    Doing so, each file will be standalone, with a duration of approximately 5 seconds, and you will be able to play these files one by one.
    Now, if you wish to store only a single file on the server while still using this method, you can also merge these files on the server side, using e.g. a tool like ffmpeg.
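For the first approach, the server only has to append each received payload to one growing file. A minimal Node.js/Express sketch of that idea follows; the /upload-chunk route, the recording.webm filename, and the use of express.raw are illustrative assumptions, not part of the original answer.

const express = require('express');
const fs = require('fs');

const app = express();

// receive the raw binary body produced by sendToServer() on the client
app.post('/upload-chunk', express.raw({ type: '*/*', limit: '50mb' }), (req, res) => {
  // appending keeps the single file valid: every new slice simply continues
  // the stream that the first (metadata-bearing) slice started
  fs.appendFileSync('recording.webm', req.body);
  res.sendStatus(200);
});

app.listen(3000);

With the second approach, each upload would instead be written to its own file and concatenated later with a tool such as ffmpeg, which is what the answer below does.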

Using a version of one of @Kalido's suggestions I got this working. It sends small standalone files to the server that won't produce any glitch in image or sound when they are concatenated into a unified file on the server side:

var mediaRecorder;
var recordingRunning = false;

// call this function to start the process
function startRecording(stream) {
  mediaRecorder = new MediaRecorder(stream);
  recordingRunning = true;
  recordVideoChunk(stream);
};

// call this function to stop the process
function stopRecording(stream) {
  recordingRunning = false;
  mediaRecorder.stop();
};

function recordVideoChunk(stream) {
  let chunks = [];

  mediaRecorder.ondataavailable = function (e) {
    chunks.push(e.data);
  };

  mediaRecorder.onstop = function () {
    const actualChunks = chunks.splice(0, chunks.length);
    const blob = new Blob(actualChunks, { type: "video/webm;codecs=vp9" });
    uploadVideoPart(blob); // Upload to server
  };

  mediaRecorder.start();

  setTimeout(function() {
    if(mediaRecorder.state == "recording")
      mediaRecorder.stop();

    if(recordingRunning)
      recordVideoChunk(stream);
  }, 10000); // 10 seconds videos
}
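The uploadVideoPart helper is not defined in the answer; a hypothetical implementation could look like the sketch below, assuming an /upload endpoint that accepts multipart form data (both the endpoint and the field name are made up for illustration).

// Hypothetical upload helper: POST each chunk as multipart form data
function uploadVideoPart(blob) {
  const formData = new FormData();
  formData.append('chunk', blob, 'chunk_' + Date.now() + '.webm');
  // '/upload' is an assumed endpoint; adapt it to your backend
  return fetch('/upload', { method: 'POST', body: formData });
}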

Later, on the server, I concatenate them with this command:

# list.txt
file 'chunk1'
file 'chunk2'
file 'chunk3'

# command
ffmpeg -avoid_negative_ts 1 -f concat -safe 0 -i list.txt -c copy output.mp4
