
C++/C FFmpeg artifact build up across video frames

Context:
I am building a recorder that captures video and audio in separate threads (using Boost thread groups) with FFmpeg 2.8.6 on Ubuntu 16.04. I followed the demuxing_decoding example here: https://www.ffmpeg.org/doxygen/2.8/demuxing_decoding_8c-example.html

Video capture specifics:
I am reading H264 off a Logitech C920 webcam and writing the video to a raw file. The issue I notice is that artifacts seem to build up across frames until a particular frame resets them. Here are my frame-grabbing and decoding functions:

// Used for injecting decoding functions for different media types, allowing
// for a generic decode loop
typedef std::function<int(AVPacket*, int*, int)> PacketDecoder;

/**
 * Decodes a video packet.
 * If the decoding operation is successful, returns the number of bytes decoded,
 * else returns the result of the decoding process from ffmpeg
 */
int decode_video_packet(AVPacket *packet,
                        int *got_frame,
                        int cached){
    int ret = 0;
    int decoded = packet->size;

    *got_frame = 0;

    //Decode video frame
    ret = avcodec_decode_video2(video_decode_context,
                                video_frame, got_frame, packet);
    if (ret < 0) {
        // av_err2str is the usual helper, but its compound-literal macro
        // does not compile as C++, so call av_strerror directly
        char errbuf[128];
        av_strerror(ret, errbuf, sizeof(errbuf));
        std::cerr << "Error decoding video frame " << errbuf << std::endl;
        decoded = ret;
    } else {
        if (*got_frame) {
            video_frame->pts = av_frame_get_best_effort_timestamp(video_frame);

            //Write to log file
            AVRational *time_base = &video_decode_context->time_base;
            log_frame(video_frame, time_base,
                      video_frame->coded_picture_number, video_log_stream);

#if( DEBUG )
            std::cout << "Video frame " << ( cached ? "(cached)" : "" )
                      << " coded:" <<  video_frame->coded_picture_number
                      << " pts:" << pts << std::endl;
#endif

            /* Copy the decoded frame to the destination buffer:
             * required because rawvideo expects tightly packed (unpadded) data */
            av_image_copy(video_dest_attr.video_destination_data,
                          video_dest_attr.video_destination_linesize,
                          (const uint8_t **)(video_frame->data),
                          video_frame->linesize,
                          video_decode_context->pix_fmt,
                          video_decode_context->width,
                          video_decode_context->height);

            //Write to rawvideo file
            fwrite(video_dest_attr.video_destination_data[0],
                   1,
                   video_dest_attr.video_destination_bufsize,
                   video_out_file);

            //Unref the refcounted frame
            av_frame_unref(video_frame);
        }
    }

    return decoded;
}

/**
 * Grabs frames in a loop and decodes them using the specified decoding function
 */
int process_frames(AVFormatContext *context,
                   PacketDecoder packet_decoder) {
    int ret = 0;
    int got_frame;
    AVPacket packet;

    //Initialize packet, set data to NULL, let the demuxer fill it
    av_init_packet(&packet);
    packet.data = NULL;
    packet.size = 0;

    // read frames from the file
    for (;;) {
        ret = av_read_frame(context, &packet);
        if (ret < 0) {
            if (ret == AVERROR(EAGAIN)) {
                continue;
            } else {
                break;
            }
        }

        //Convert timing fields to the decoder timebase
        unsigned int stream_index = packet.stream_index;
        av_packet_rescale_ts(&packet,
                             context->streams[stream_index]->time_base,
                             context->streams[stream_index]->codec->time_base);

        AVPacket orig_packet = packet;
        do {
            ret = packet_decoder(&packet, &got_frame, 0);
            if (ret < 0) {
                break;
            }
            packet.data += ret;
            packet.size -= ret;
        } while (packet.size > 0);
        av_free_packet(&orig_packet);

        if (stop_recording) {
            break;
        }
    }

    //Flush cached frames
    std::cout << "Flushing frames" << std::endl;
    packet.data = NULL;
    packet.size = 0;
    do {
        packet_decoder(&packet, &got_frame, 1);
    } while (got_frame);

    av_log(0, AV_LOG_INFO, "Done processing frames\n");
    return ret;
}


Questions:

  1. How do I go about debugging the underlying issue?
  2. Is it possible that running the decoding code in a thread other than the one in which the decoding context was opened is causing the problem?
  3. Am I doing something wrong in the decoding code?

Things I have tried/found:

  1. I found a thread about the same problem here: FFMPEG decoding artifacts between keyframes (I cannot post samples of my corrupted frames due to privacy issues, but the image linked in that question depicts the same issue I have). However, the answer was posted by the OP without specific details about how the issue was fixed; he only mentions that he wasn't 'preserving the packets correctly', but nothing about what was wrong or how to fix it. I do not have enough reputation to post a comment seeking clarification.

  2. I was initially passing the packet into the decoding function by value, but switched to passing by pointer on the off chance that the packet freeing was being done incorrectly.

  3. I found another question about debugging decoding issues, but couldn't find anything conclusive: How is video decoding corruption debugged?

I'd appreciate any insight. Thanks a lot!

[EDIT] In response to Ronald's answer, I am adding a little more information that wouldn't fit in a comment:

  1. I am only calling decode_video_packet() from the thread processing video frames; the other thread, which processes audio frames, calls a similar decode_audio_packet() function. So only one thread calls each function. I should mention that I have set thread_count in the decoding context to 1; without this I would get a segfault in malloc.c while flushing the cached frames.

  2. I can see this being a problem if process_frames and the frame decoder function were run on separate threads, which is not the case. Is there a specific reason why it would matter whether the freeing is done within the function or after it returns? I believe the decoding function is passed a copy of the original packet because multiple decode calls may be required for an audio packet, in case the decoder doesn't consume the entire packet at once.

  3. A general problem is that the corruption does not occur all the time. I could debug better if it were deterministic. Otherwise, I can't even say whether a solution works or not.

A few things to check:

  • Are you running multiple threads that are calling decode_video_packet()? If you are: don't do that! FFmpeg has built-in support for multi-threaded decoding; you should let FFmpeg do threading internally and transparently.
  • You are calling av_free_packet() right after calling the frame decoder function, but at that point it may not yet have had a chance to copy the contents. You should probably let decode_video_packet() free the packet instead, after calling avcodec_decode_video2().
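The pointer bookkeeping behind that second point can be illustrated without FFmpeg. The sketch below uses a hypothetical Packet struct standing in for AVPacket: the partial-decode loop advances data and shrinks size, so any free must go through a saved copy of the original pointer (this mirrors the orig_packet copy in the question's process_frames):

```cpp
#include <cstdlib>

// Hypothetical stand-in for AVPacket: only the fields the loop touches.
struct Packet {
    unsigned char *data;
    int size;
};

// Mimics the partial-decode loop: consume up to 3 "bytes" per call,
// advancing data and shrinking size; returns the total consumed.
int drain_packet(Packet &pkt) {
    int total = 0;
    while (pkt.size > 0) {
        int consumed = pkt.size < 3 ? pkt.size : 3;
        pkt.data += consumed;
        pkt.size -= consumed;
        total += consumed;
    }
    return total;
}
```

After drain_packet() returns, pkt.data points one past the end of the allocation, so freeing pkt.data would be undefined behaviour; only the saved copy still holds the original pointer that av_free_packet() needs.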

General debugging advice:

  • Run it without any threading and see if that works;
  • if it does, and it fails with threading, use thread debuggers such as tsan or helgrind to help find race conditions that point to your code;
  • it can also help to know whether the output you're getting is reproducible (this suggests a non-threading-related bug in your code) or changes from one run to the next (this suggests a race condition in your code).
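One way to act on that last point is to log a checksum per decoded frame and diff the logs of two runs: if the checksums diverge at the same frame every time, the bug is deterministic; if the divergence point moves, suspect a race. A minimal sketch using FNV-1a (the function name is illustrative; any stable hash works):

```cpp
#include <cstdint>
#include <cstddef>

// FNV-1a hash of a decoded frame buffer. Log one value per frame
// (e.g. alongside coded_picture_number) and compare runs.
uint64_t frame_checksum(const uint8_t *data, size_t len) {
    uint64_t h = 14695981039346656037ULL;   // FNV-1a 64-bit offset basis
    for (size_t i = 0; i < len; ++i) {
        h ^= data[i];
        h *= 1099511628211ULL;              // FNV-1a 64-bit prime
    }
    return h;
}
```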

And yes, the periodic clean-ups are because of keyframes.
