
C++ - applying filter in ffmpeg

I'm trying to deinterlace a frame using ffmpeg (latest release). Related to this question, I can get the filter I want using this statement:

AVFilter *filter = avfilter_get_by_name("yadif");

After that, I open the filter context as:

AVFilterContext *filter_ctx;
avfilter_open(&filter_ctx, filter, NULL);

My first question is about this function. Visual Studio warns me that avfilter_open is deprecated. What is the alternative?
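For anyone hitting the same warning: the usual replacement is to let `avfilter_graph_create_filter()` allocate and initialise the context inside a filter graph in one call (the accepted answer below uses the same function for its buffer source and sink). A minimal sketch, assuming you link against libavfilter; the instance name `"deint"` and the option string are illustrative, not from the original post:

```c
/* Sketch: modern replacement for avfilter_open() + avfilter_init_str().
 * avfilter_graph_create_filter() allocates the context inside the graph
 * and applies the option string in one step. */
#include <libavfilter/avfilter.h>

AVFilterContext *make_yadif(AVFilterGraph *graph)
{
    AVFilter *yadif = avfilter_get_by_name("yadif");
    AVFilterContext *ctx = NULL;

    if (avfilter_graph_create_filter(&ctx, yadif, "deint",
                                     "mode=send_field:parity=auto",
                                     NULL, graph) < 0)
        return NULL;   /* on failure the graph still owns any allocation */
    return ctx;
}
```

Since the context is owned by the graph, a single `avfilter_graph_free()` later releases it along with every other filter.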

After that, I do:

avfilter_init_str(filter_ctx, "yadif=1:-1");

And it always fails. I've tried "1:-1" instead of "yadif=1:-1", but that always fails too. What parameter should I use?

EDIT: A value of "1" or "2", for example, works. Debugging it, I found that with one of these values the function uses mode=1 or mode=2. The explanation of those values is in this link.
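One likely reason the full string fails: the `yadif=...` prefix belongs to the graph-description syntax parsed by `avfilter_graph_parse*()`, while `avfilter_init_str()` expects only the option list itself. For reference, the positional values map onto yadif's named options, which are less error-prone to spell out:

```
mode=0    send_frame: one output frame per input frame (default)
mode=1    send_field: one output frame per field (doubles the frame rate)
mode=2    send_frame_nospatial: like 0, but skips the spatial interlacing check
mode=3    send_field_nospatial: like 1, but skips the spatial interlacing check

parity=0  assume top field first
parity=1  assume bottom field first
parity=-1 auto-detect from the frame flags
```

So "1:-1" positionally means `mode=send_field:parity=auto`.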

Then, I have an AVFrame *frame that is the frame I want to deinterlace. Once the last statement works, I'll have the filter and its context initialised. How do I apply this filter to my frame?

Thanks for your help.

I understand your question is over a year old now, but recently I had to work with interlaced DVB-TS streams, so I might be able to help anybody else coming across this subject.

These snippets are from a finished player I've written.

Initialise the filter graph:

void VideoManager::init_filter_graph(AVFrame *frame) {
    if (filter_initialised) return;

    int result;

    AVFilter *buffer_src   = avfilter_get_by_name("buffer");
    AVFilter *buffer_sink  = avfilter_get_by_name("buffersink");
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    AVFilterInOut *outputs = avfilter_inout_alloc();

    AVCodecContext *ctx = ffmpeg.vid_stream.context;
    char args[512];

    int frame_fix = 0; // fix bad width on some streams
    if (frame->width < 704) frame_fix = 2;
    else if (frame->width > 704) frame_fix = -16;

    snprintf(args, sizeof(args),
         "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
         frame->width + frame_fix,
         frame->height,
         frame->format,// ctx->pix_fmt,
         ctx->time_base.num,
         ctx->time_base.den,
         ctx->sample_aspect_ratio.num,
         ctx->sample_aspect_ratio.den);

    const char *description = "yadif=1:-1:0";

    LOGD("Filter: %s - Settings: %s", description, args);

    filter_graph = avfilter_graph_alloc();
    result = avfilter_graph_create_filter(&filter_src_ctx, buffer_src, "in", args, NULL, filter_graph);
    if (result < 0) {
        LOGI("Filter graph - Unable to create buffer source");
        return;
    }

    AVBufferSinkParams *params = av_buffersink_params_alloc();
    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };

    params->pixel_fmts = pix_fmts;
    result = avfilter_graph_create_filter(&filter_sink_ctx, buffer_sink, "out", NULL, params, filter_graph);
    av_free(params);
    if (result < 0) {
        LOGI("Filter graph - Unable to create buffer sink");
        return;
    }

    inputs->name        = av_strdup("out");
    inputs->filter_ctx  = filter_sink_ctx;
    inputs->pad_idx     = 0;
    inputs->next        = NULL;

    outputs->name       = av_strdup("in");
    outputs->filter_ctx = filter_src_ctx;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    result = avfilter_graph_parse_ptr(filter_graph, description, &inputs, &outputs, NULL);
    if (result < 0) LOGI("avfilter_graph_parse_ptr ERROR");

    result = avfilter_graph_config(filter_graph, NULL);
    if (result < 0) LOGI("avfilter_graph_config ERROR");

    filter_initialised = true;
}

When processing the video packets from the stream, check whether the frame is interlaced and, if so, send it off to the filter. The filter will then return the deinterlaced frames back to you.

void FFMPEG::process_video_packet(AVPacket *pkt) {
    int got;
    AVFrame *frame = vid_stream.frame;
    avcodec_decode_video2(vid_stream.context, frame, &got, pkt);

    if (got) {
        if (!frame->interlaced_frame) {     // not interlaced
            Video.add_av_frame(frame, 0);
        } else {
            if (!Video.filter_initialised) {
                Video.init_filter_graph(frame);
            }

            av_buffersrc_add_frame_flags(Video.filter_src_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF);
            int c = 0;

            while (true) {
                AVFrame *filter_frame = ffmpeg.vid_stream.filter_frame;

                int result = av_buffersink_get_frame(Video.filter_sink_ctx, filter_frame);

                if (result == AVERROR(EAGAIN) || result == AVERROR_EOF) break;
                if (result < 0) return;

                Video.add_av_frame(filter_frame, c++);
                av_frame_unref(filter_frame);
            }
        }
    }
}
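One detail the loop above skips: at end of stream, yadif may still be holding a frame back, and the graph itself needs freeing. A sketch of a teardown routine, assuming the same `filter_graph`/`filter_src_ctx`/`filter_sink_ctx`/`filter_initialised` members used above (the method name `close_filter_graph` is my invention, not from the original player):

```cpp
/* Sketch: flush yadif and tear the graph down at end of stream.
 * Member names match the snippets above; error handling trimmed. */
void VideoManager::close_filter_graph() {
    if (!filter_initialised) return;

    // Pushing a NULL frame signals EOF to the buffer source, so yadif
    // can emit any field it is still buffering.
    av_buffersrc_add_frame_flags(filter_src_ctx, NULL, 0);

    AVFrame *out = av_frame_alloc();
    while (av_buffersink_get_frame(filter_sink_ctx, out) >= 0)
        av_frame_unref(out);           // drain the remaining frames

    av_frame_free(&out);
    avfilter_graph_free(&filter_graph); // also frees the src/sink contexts
    filter_src_ctx = filter_sink_ctx = NULL;
    filter_initialised = false;
}
```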

Hope this helps anyone, because finding information about ffmpeg is tough going.
