
MPEG-DASH and fragmented mp4

My understanding of fragmented mp4 is that it is a single file, but internally it is structured as fragments. Can someone explain how these fragments can be addressed in the .mpd file for DASH? The .mpd files I've seen appear to address the various segments with separate URLs, but a fragmented mp4, I imagine, would have to be addressed by byte offsets into the same URL. How does the browser then know which times correspond to which byte ranges?

Here's an example mpd for the MPEG-DASH main profile. The mp4 file described by this mpd is a fragmented mp4. As you can see:

<SegmentURL media="bunny_15s_200kbit/bunny_200kbit_dashNonSeg.mp4" mediaRange="868-347185"/>
<SegmentURL media="bunny_15s_200kbit/bunny_200kbit_dashNonSeg.mp4" mediaRange="347186-664464"/>

In the <SegmentURL> element, the fragments are all addressed through the same URL, and the byte offsets are given in the @mediaRange attribute.
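To make the mapping concrete, here is a minimal sketch (my own, not from the original answer) of pulling the URL and byte range out of each <SegmentURL> element. A real player would run DOMParser over the full manifest; a regex over the two lines above is enough to show the idea, and `parseSegmentUrls` is a hypothetical helper name:

```javascript
// Hypothetical sketch: extract {url, start, end} from <SegmentURL> elements.
function parseSegmentUrls(mpdText) {
  var segments = [];
  var re = /<SegmentURL\s+media="([^"]+)"\s+mediaRange="(\d+)-(\d+)"\s*\/>/g;
  var m;
  while ((m = re.exec(mpdText)) !== null) {
    segments.push({
      url: m[1],
      start: parseInt(m[2], 10), // first byte of the segment
      end: parseInt(m[3], 10)    // last byte of the segment (inclusive)
    });
  }
  return segments;
}

var mpd =
  '<SegmentURL media="bunny_15s_200kbit/bunny_200kbit_dashNonSeg.mp4" mediaRange="868-347185"/>\n' +
  '<SegmentURL media="bunny_15s_200kbit/bunny_200kbit_dashNonSeg.mp4" mediaRange="347186-664464"/>';

console.log(parseSegmentUrls(mpd));
```

Each entry gives you exactly the string you need for an HTTP Range header, e.g. "bytes=868-347185".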

The .mpd file has a list of the segments with their byte ranges, as shown above. To access a segment, you parse the mediaRange attribute for each entry and request it with something like XHR, using setRequestHeader to specify the byte range. With this method, no server component is needed. Here's some code I've been using:

  var xhr = new XMLHttpRequest();

  // range is in the format "1234-34567"
  // url is the .mp4 file path
  if (range && url) { // make sure we've got content in both params
    xhr.open('GET', url);
    xhr.setRequestHeader("Range", "bytes=" + range);
    xhr.responseType = 'arraybuffer'; // must be set before send()
    xhr.send();
    // watch the ready state
    xhr.addEventListener("readystatechange", function () {
      if (xhr.readyState == 4) { // wait for the fragment to load
        // add the response to the buffer
        try {
          // videoSource is a SourceBuffer on your MediaSource object
          videoSource.appendBuffer(new Uint8Array(xhr.response));
          // play once the SourceBuffer has finished ingesting the fragment
          videoSource.addEventListener('updateend', function () {
            videoElement.play();
          }, { once: true });
        } catch (e) {
          // appendBuffer throws if the buffer is still updating; fail quietly
        }
      }
    }, false);
  }

The server has a manifest that can be created by scanning the file for moof boxes. One moof + mdat pair = one fragment. When a request for a fragment is made, the file offset is looked up in the manifest and the correct boxes are returned.
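Such a manifest can be built with a simple top-level box walk. This is a hedged sketch of my own, not the answerer's actual code: every mp4 box starts with a 4-byte big-endian size followed by a 4-byte type, so stepping from box to box and noting where each 'moof' starts yields the fragment offsets (64-bit sizes are skipped here for brevity):

```javascript
// Walk top-level mp4 boxes and record the offset/size of each 'moof' box.
// Each moof plus the mdat that follows it is one fragment.
function findMoofOffsets(buffer) {
  var view = new DataView(buffer);
  var offsets = [];
  var pos = 0;
  while (pos + 8 <= buffer.byteLength) {
    var size = view.getUint32(pos); // 32-bit big-endian box size
    var type = String.fromCharCode(
      view.getUint8(pos + 4), view.getUint8(pos + 5),
      view.getUint8(pos + 6), view.getUint8(pos + 7));
    if (type === 'moof') offsets.push({ offset: pos, size: size });
    if (size < 8) break; // size 0 or 1 (64-bit largesize) not handled in this sketch
    pos += size;
  }
  return offsets;
}

// Helper to fabricate a box so the walk can be demonstrated without a real file.
function makeBox(type, totalSize) {
  var bytes = new Uint8Array(totalSize);
  new DataView(bytes.buffer).setUint32(0, totalSize);
  for (var i = 0; i < 4; i++) bytes[4 + i] = type.charCodeAt(i);
  return bytes;
}

// Fake file: ftyp (16 bytes), moof (12 bytes), mdat (20 bytes).
var file = new Uint8Array(48);
file.set(makeBox('ftyp', 16), 0);
file.set(makeBox('moof', 12), 16);
file.set(makeBox('mdat', 20), 28);
console.log(findMoofOffsets(file.buffer)); // one fragment starting at offset 16
```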

As far as I understand it... In the case of the DASH 'onDemand' profile, it is the job of the DASH packager to create the *.mpd (manifest) and specify which byte ranges map to a segment (which could be a number of fragments). The client then loads the *.mpd and makes HTTP byte-range requests for the ranges in the manifest. I think the DASH 'live' profile is more similar to Smooth Streaming, in that each segment has its own URL.

If you need to find the positions of the fragments within the mp4 container, I believe this information is in the segment's 'sidx' box.
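For reference, here is a rough sketch (my own, covering only version 0 of the box and 32-bit fields) of how the sidx entries recover the time-to-byte-range mapping: each reference carries a subsegment size in bytes and a duration in timescale ticks, so accumulating them gives both the byte range and the start time of every fragment:

```javascript
// Hypothetical sketch: parse a version-0 'sidx' (Segment Index) box.
// sidxOffset is the byte offset of the sidx box within the file buffer.
function parseSidx(buffer, sidxOffset) {
  var v = new DataView(buffer, sidxOffset);
  var size = v.getUint32(0);                 // total size of the sidx box
  if (v.getUint8(8) !== 0) throw new Error('only version 0 handled in this sketch');
  var timescale = v.getUint32(16);           // ticks per second
  var time = v.getUint32(20) / timescale;    // earliest_presentation_time
  var firstOffset = v.getUint32(24);         // gap from end of sidx to first subsegment
  var refCount = v.getUint16(30);            // reference_count
  var byteStart = sidxOffset + size + firstOffset;
  var refs = [];
  for (var i = 0, pos = 32; i < refCount; i++, pos += 12) {
    var refSize = v.getUint32(pos) & 0x7fffffff;    // strip reference_type bit
    var duration = v.getUint32(pos + 4) / timescale; // subsegment_duration
    refs.push({ start: byteStart, end: byteStart + refSize - 1,
                time: time, duration: duration });
    byteStart += refSize;
    time += duration;
  }
  return refs;
}

// Fabricated sidx with two references for demonstration.
var buf = new ArrayBuffer(32 + 2 * 12);
var v = new DataView(buf);
v.setUint32(0, buf.byteLength);  // box size
'sidx'.split('').forEach(function (c, i) { v.setUint8(4 + i, c.charCodeAt(0)); });
v.setUint32(16, 1000);           // timescale: 1000 ticks/second
v.setUint16(30, 2);              // reference_count
v.setUint32(32, 100);            // ref 0: 100 bytes
v.setUint32(36, 2000);           // ref 0: 2 seconds
v.setUint32(44, 200);            // ref 1: 200 bytes
v.setUint32(48, 3000);           // ref 1: 3 seconds
console.log(parseSidx(buf, 0));
```

This is exactly the table the browser needs to answer "which byte range holds time t" without any out-of-band manifest.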

It seems ffmpeg now also supports HLS directly.
