
AVFoundation add first frame to video

I'm trying to control the way that videos produced by my app appear in the Photos app on iOS. All of the videos I produce start with a black frame, then things fade in and out, etc. When these are saved to Photos, Apple takes the first frame (a black square) and uses it as the thumbnail in Photos. I'd like to change this so I can set my own thumbnail, making the video easy for people to recognize.

Since I can't find any built-in API for this, I'm trying to hack it by adding a thumbnail I generate as the first frame of the video. I'm using AVFoundation for this, but I'm having some issues.
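(For reference, the generateThumbnail: helper used below isn't shown in the question; a minimal sketch of one possible implementation, using AVAssetImageGenerator and an arbitrary capture time, might look like this.)

// Hypothetical helper, not the asker's actual code: produce a UIImage
// from a frame of the asset using AVAssetImageGenerator.
- (UIImage *)generateThumbnail:(AVAsset *)asset {
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES;   // respect track rotation

    NSError *error = nil;
    CMTime time = CMTimeMakeWithSeconds(1.0, 600);    // assumed: grab a frame ~1s in
    CGImageRef cgImage = [generator copyCGImageAtTime:time actualTime:NULL error:&error];
    if (!cgImage) {
        NSLog(@"Thumbnail generation failed: %@", error);
        return nil;
    }
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}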

My code throws the following error, despite having added the output and called -startReading: '[AVAssetReaderTrackOutput copyNextSampleBuffer] cannot copy next sample buffer before adding this output to an instance of AVAssetReader (using -addOutput:) and calling -startReading on that asset reader'.

Here is my code:

AVAsset *asset = [[AVURLAsset alloc] initWithURL:fileUrl options:nil];
UIImage *frame = [self generateThumbnail:asset];

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:640], AVVideoWidthKey,
                               [NSNumber numberWithInt:360], AVVideoHeightKey,
                               nil];

// Reader for the existing video asset
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:asset error:nil];
AVAssetReaderOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[asset.tracks firstObject]
                                                                               outputSettings:nil];
[assetReader addOutput:readerOutput];

// Writer for the new file that should start with the thumbnail frame
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:path
                                                       fileType:AVFileTypeMPEG4
                                                          error:nil];
NSParameterAssert(videoWriter);

AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                     outputSettings:videoSettings];

AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                 assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                 sourcePixelBufferAttributes:nil];

NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);

[videoWriter addInput:writerInput];

[assetReader startReading];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

// Render the generated thumbnail into a pixel buffer
CVPixelBufferRef buffer = [self pixelBufferFromCGImage:frame.CGImage andSize:frame.size];

// Append the thumbnail as the first frame at time zero
BOOL append_ok = NO;
while (!append_ok) {
    if (adaptor.assetWriterInput.readyForMoreMediaData) {
        append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
        CVPixelBufferPoolRef bufferPool = adaptor.pixelBufferPool;
        NSParameterAssert(bufferPool != NULL);

        [NSThread sleepForTimeInterval:0.05];
    } else {
        [NSThread sleepForTimeInterval:0.1];
    }
}
CVBufferRelease(buffer);

// Then pull sample buffers from the reader and append them to the writer
dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
[writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
    CMSampleBufferRef nextBuffer;
    while (writerInput.readyForMoreMediaData) {
        nextBuffer = [readerOutput copyNextSampleBuffer];
        if(nextBuffer) {
            NSLog(@"Wrote: %zu bytes", CMSampleBufferGetTotalSampleSize(nextBuffer));
            [writerInput appendSampleBuffer:nextBuffer];
        } else {
            [writerInput markAsFinished];
            [videoWriter finishWritingWithCompletionHandler:^{
                //int res = videoWriter.status;
            }];
            break;
        }
    }
}];
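
(The pixelBufferFromCGImage:andSize: helper above also isn't shown; a sketch of a typical implementation, which may differ from the asker's, is below.)

// Hypothetical helper: render a CGImage into a newly created CVPixelBuffer.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image andSize:(CGSize)size {
    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey: @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES };
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          (size_t)size.width, (size_t)size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pixelBuffer);
    if (status != kCVReturnSuccess) {
        return NULL;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 (size_t)size.width, (size_t)size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return pixelBuffer;   // caller releases with CVBufferRelease, as above
}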

I tried some variations on this, but all to no avail. I've also seen some crashes due to the file format. I'm using an mp4 file (not sure how to find out its compression status or whether it's supported), but I haven't been able to make it work even with an uncompressed .mov file (made with Photo Booth on a Mac).

Any ideas what I'm doing wrong?

Just had the same problem.

Your assetReader is released by ARC at the end of the function, but the block reading sample buffers from readerOutput keeps trying to read content.

When assetReader is gone, readerOutput is disconnected from it, hence the error telling you to add the output to an AVAssetReader and call -startReading on it.

The fix is to make sure that assetReader isn't released, e.g. by keeping it in a property.
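
A minimal sketch of that fix (the class and property names here are assumptions, not from the question):

// Keep a strong reference so ARC doesn't deallocate the reader while the
// requestMediaDataWhenReadyOnQueue: block is still pulling samples from it.
@interface VideoExporter ()
@property (nonatomic, strong) AVAssetReader *assetReader;
@end

// ...then use the property instead of a local variable:
self.assetReader = [AVAssetReader assetReaderWithAsset:asset error:nil];
[self.assetReader addOutput:readerOutput];
[self.assetReader startReading];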

As an alternative:

You can capture assetReader inside the requestMediaDataWhenReadyOnQueue block, for example by extracting the readerOutput from assetReader inside that block.

Depending on your architecture, it may be a cleaner solution not to add a property just for memory management.
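
A sketch of that variant, reusing the question's own variables (the block's strong capture of assetReader keeps it alive):

// Capture assetReader in the block so it stays alive, and fetch the
// output from it inside the block.
[writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
    AVAssetReaderOutput *output = assetReader.outputs.firstObject;
    while (writerInput.readyForMoreMediaData) {
        CMSampleBufferRef nextBuffer = [output copyNextSampleBuffer];
        if (nextBuffer) {
            [writerInput appendSampleBuffer:nextBuffer];
            CFRelease(nextBuffer);
        } else {
            [writerInput markAsFinished];
            [videoWriter finishWritingWithCompletionHandler:^{}];
            break;
        }
    }
}];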

I implemented this approach in Swift and it works fine.
