
How to use AVAssetReader and AVAssetWriter for multiple tracks (audio and video) simultaneously?

I know how to use AVAssetReader and AVAssetWriter, and have successfully used them to grab a video track from one movie and transcode it into another. However, I'd like to do this with audio as well. Do I have to create an AVAssetExportSession after I've finished the initial transcode, or is there some way to switch between tracks in the midst of a writing session? I'd hate to have to deal with the overhead of an AVAssetExportSession.

I ask because the pull-style method - while ([assetWriterInput isReadyForMoreMediaData]) {...} - assumes only one track. How could it be used for more than one track, i.e. both an audio and a video track?

AVAssetWriter will automatically interleave the requests on its associated AVAssetWriterInputs in order to integrate the different tracks into the output file. Just add an AVAssetWriterInput for each of the tracks that you have, and then call requestMediaDataWhenReadyOnQueue:usingBlock: on each of your AVAssetWriterInputs.
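For context, here is a condensed setup sketch showing one output/input pair per track. It is not from the original answer: names such as sourceAsset and outputURL are placeholders, error handling is omitted, and passing nil output settings requests passthrough of the original sample format.

```objc
#import <AVFoundation/AVFoundation.h>

// Placeholder sketch: wire up one AVAssetReaderTrackOutput /
// AVAssetWriterInput pair per track before starting the session.
- (void)setUpWithAsset:(AVAsset *)sourceAsset outputURL:(NSURL *)outputURL
{
    NSError *error = nil;
    _reader = [[AVAssetReader alloc] initWithAsset:sourceAsset error:&error];
    _writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                        fileType:AVFileTypeQuickTimeMovie
                                           error:&error];

    for (AVAssetTrack *track in [sourceAsset tracks]) {
        // nil settings = samples delivered/written in their stored format
        AVAssetReaderTrackOutput *output =
            [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                       outputSettings:nil];
        [_reader addOutput:output];

        AVAssetWriterInput *input =
            [AVAssetWriterInput assetWriterInputWithMediaType:[track mediaType]
                                               outputSettings:nil];
        [_writer addInput:input];
    }

    [_reader startReading];
    [_writer startWriting];
    [_writer startSessionAtSourceTime:kCMTimeZero];

    // One requestMediaDataWhenReadyOnQueue:usingBlock: call per pair,
    // using the method shown below.
    for (int i = 0; i < (int)[[_reader outputs] count]; i++)
        [self requestMediaDataForTrack:i];
}
```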

Here's a method I have that calls requestMediaDataWhenReadyOnQueue:usingBlock:. I call this method from a loop over the number of output/input pairs I have. (A separate method is good both for code readability and also because, unlike a loop, each call sets up a separate stack frame for the block.)

You only need one dispatch_queue_t, and you can reuse it for all of the tracks. Note that you definitely should not call dispatch_async from your block, because requestMediaDataWhenReadyOnQueue:usingBlock: expects the block to, well, block until it has filled in as much data as the AVAssetWriterInput will take. You don't want to return before then.

- (void)requestMediaDataForTrack:(int)i {
  AVAssetReaderOutput *output = [[_reader outputs] objectAtIndex:i];
  AVAssetWriterInput *input = [[_writer inputs] objectAtIndex:i];

  [input requestMediaDataWhenReadyOnQueue:_processingQueue usingBlock:
    ^{
      // The block captures (and thus retains) self, input, and output for
      // as long as it is alive, so no explicit retain is needed here.
      while ([input isReadyForMoreMediaData]) {
        CMSampleBufferRef sampleBuffer;
        if ([_reader status] == AVAssetReaderStatusReading &&
            (sampleBuffer = [output copyNextSampleBuffer])) {

          BOOL result = [input appendSampleBuffer:sampleBuffer];
          CFRelease(sampleBuffer);

          if (!result) {
            [_reader cancelReading];
            break;
          }
        } else {
          [input markAsFinished];

          switch ([_reader status]) {
            case AVAssetReaderStatusReading:
              // the reader has more for other tracks, even if this one is done
              break;

            case AVAssetReaderStatusCompleted:
              // your method for when the conversion is done
              // should call finishWriting on the writer
              [self readingCompleted];
              break;

            case AVAssetReaderStatusCancelled:
              [_writer cancelWriting];
              [_delegate converterDidCancel:self];
              break;

            case AVAssetReaderStatusFailed:
              [_writer cancelWriting];
              break;
          }

          break;
        }
      }
    }
  ];
}

Have you tried using two AVAssetWriterInputs and pushing the samples through a worker queue? Here is a rough sketch.

processing_queue = dispatch_queue_create("com.mydomain.gcdqueue.mediaprocessor", NULL);

[videoAVAssetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    dispatch_async(processing_queue, ^{ /* process video */ });
}];

[audioAVAssetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    dispatch_async(processing_queue, ^{ /* process audio */ });
}];

You can use dispatch groups!

Check out the AVReaderWriter example for Mac OS X...

I am quoting directly from the sample RWDocument.m:

- (BOOL)startReadingAndWritingReturningError:(NSError **)outError
{
    BOOL success = YES;
    NSError *localError = nil;

    // Instruct the asset reader and asset writer to get ready to do work
    success = [assetReader startReading];
    if (!success)
        localError = [assetReader error];
    if (success)
    {
        success = [assetWriter startWriting];
        if (!success)
            localError = [assetWriter error];
    }

    if (success)
    {
        dispatch_group_t dispatchGroup = dispatch_group_create();

        // Start a sample-writing session
        [assetWriter startSessionAtSourceTime:[self timeRange].start];

        // Start reading and writing samples
        if (audioSampleBufferChannel)
        {
            // Only set audio delegate for audio-only assets, else let the video channel drive progress
            id <RWSampleBufferChannelDelegate> delegate = nil;
            if (!videoSampleBufferChannel)
                delegate = self;

            dispatch_group_enter(dispatchGroup);
            [audioSampleBufferChannel startWithDelegate:delegate completionHandler:^{
                dispatch_group_leave(dispatchGroup);
            }];
        }
        if (videoSampleBufferChannel)
        {
            dispatch_group_enter(dispatchGroup);
            [videoSampleBufferChannel startWithDelegate:self completionHandler:^{
                dispatch_group_leave(dispatchGroup);
            }];
        }

        // Set up a callback for when the sample writing is finished
        dispatch_group_notify(dispatchGroup, serializationQueue, ^{
            BOOL finalSuccess = YES;
            NSError *finalError = nil;

            if (cancelled)
            {
                [assetReader cancelReading];
                [assetWriter cancelWriting];
            }
            else
            {
                if ([assetReader status] == AVAssetReaderStatusFailed)
                {
                    finalSuccess = NO;
                    finalError = [assetReader error];
                }

                if (finalSuccess)
                {
                    finalSuccess = [assetWriter finishWriting];
                    if (!finalSuccess)
                        finalError = [assetWriter error];
                }
            }

            [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
        });

        dispatch_release(dispatchGroup);
    }

    if (outError)
        *outError = localError;

    return success;
}
