
Encoding raw video to H264 is not playable

What I'm trying to achieve is this: regardless of the camera or RTSP stream encoding, decode the input, re-encode it to H264, and save it in an MP4 container. The problem is that the video is not playable, even though no errors are thrown. I can see the file growing, but then nothing. What am I missing?

AVFormatContext* pInputFmtCtx = avformat_alloc_context();
AVInputFormat* inputFormat = av_find_input_format(Format); // format = dshow
avformat_open_input(&pInputFmtCtx, StreamUrl, inputFormat, &options); // StreamUrl is "video=Logitech Cam" or similar

...... find stream info and video index

// find decoder, for that particular camera it is RAW_VIDEO
AVCodecParameters* videoCodecParams = pInputFmtCtx->streams[_vidStreamIndex]->codecpar;
AVCodec* videoDecoder = avcodec_find_decoder(videoCodecParams->codec_id);

//init and open VIDEO codec context
pVideoCodecContext = avcodec_alloc_context3(videoDecoder);
avcodec_parameters_to_context(pVideoCodecContext, videoCodecParams);
avcodec_open2(pVideoCodecContext, videoDecoder, null);

// now output format
AVFormatContext* pOutputFmtCtx = null;
avformat_alloc_output_context2(&pOutputFmtCtx, null, null, fileName); // filename is always .mp4

// iterate over pInputFmtCtx->nb_streams
// create new stream and H264 encoder
AVStream* out_stream = avformat_new_stream(pOutputFmtCtx, null);

// init video encoder
AVCodec* videoEncoder = avcodec_find_encoder_by_name("libx264");
pVideoEncodeCodecContext = avcodec_alloc_context3(videoEncoder);

pVideoEncodeCodecContext->width = pVideoCodecContext->width;
pVideoEncodeCodecContext->height = pVideoCodecContext->height;
pVideoEncodeCodecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;
pVideoEncodeCodecContext->bit_rate = 2 * 1000 * 1000;
pVideoEncodeCodecContext->rc_buffer_size = 4 * 1000 * 1000;
pVideoEncodeCodecContext->rc_max_rate = 2 * 1000 * 1000;
pVideoEncodeCodecContext->rc_min_rate = 3 * 1000 * 1000;
pVideoEncodeCodecContext->framerate = framerate;
pVideoEncodeCodecContext->max_b_frames = 0;
pVideoEncodeCodecContext->time_base = av_inv_q(framerate);
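
(Side note on the time base: av_inv_q just inverts the rational, so time_base = av_inv_q(framerate) means one tick per frame. A quick sketch of that relationship in Python, using Fraction as a stand-in for AVRational:

```python
from fractions import Fraction

# av_inv_q(framerate): the encoder time base is the reciprocal of the
# frame rate, i.e. one tick per frame.
framerate = Fraction(30, 1)   # 30 fps
time_base = 1 / framerate

print(time_base)              # 1/30
# With pts incremented by 1 per frame (as in the read loop below),
# frame N is presented at N * time_base seconds.
print(float(10 * time_base))  # frame 10 -> ~0.333 s
```
)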

av_opt_set(pVideoEncodeCodecContext->priv_data, "preset", "slow", 0);
av_opt_set(pVideoEncodeCodecContext->priv_data, "tune", "zerolatency", 0);
av_opt_set(pVideoEncodeCodecContext->priv_data, "vprofile", "baseline", 0);

// and open it and copy params 
avcodec_open2(pVideoEncodeCodecContext, videoEncoder, null);
avcodec_parameters_from_context(out_stream->codecpar, pVideoEncodeCodecContext);

// open file and write header
avio_open(&pOutputFmtCtx->pb, fileName, AVIO_FLAG_WRITE);
avformat_write_header(pOutputFmtCtx, null);

// now reading 
AVPacket* pkt = av_packet_alloc();
AVFrame* frame = av_frame_alloc();
AVPacket* out_pkt = av_packet_alloc();
while (av_read_frame(pInputFmtCtx, pkt) >= 0) 
{
   avcodec_send_packet(pVideoCodecContext, pkt);
   avcodec_receive_frame(pVideoCodecContext, frame);

   // using sws_getContext and sws_scale the frame is converted to YUV_420P
   // which is fine, because I also have preview and I can see the frames fine
   var yuvFrame = _frameConverter.Convert(frame);

   yuvFrame->pts = frame_count++; 
   int ret = avcodec_send_frame(pVideoEncodeCodecContext, yuvFrame);
   while (ret >= 0)
   {
      ret = avcodec_receive_packet(pVideoEncodeCodecContext, out_pkt);

      if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
      {
          break;
      }

      int out_stream_index = _streamMapping[out_pkt->stream_index];
      AVStream* in_stream = pInputFmtCtx->streams[out_pkt->stream_index];
      AVStream* out_stream = pOutputFmtCtx->streams[out_stream_index];

      //rescale the input timestamps to output timestamps
      out_pkt->pts = av_rescale_q_rnd(out_pkt->pts, in_stream->time_base, pVideoEncodeCodecContext->time_base, AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
      out_pkt->dts = av_rescale_q_rnd(out_pkt->dts, in_stream->time_base, pVideoEncodeCodecContext->time_base, AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
      out_pkt->duration = av_rescale_q(out_pkt->duration, in_stream->time_base, out_stream->time_base);
      out_pkt->stream_index = out_stream_index;
      out_pkt->pos = -1;

      ret = av_interleaved_write_frame(pOutputFmtCtx, out_pkt);

      av_packet_unref(out_pkt);
  }
}

// later on
av_write_trailer(pOutputFmtCtx);

EDIT: as suggested, I'm providing a sample mp4 and log.

It is fixed. What I have done is:

Added pVideoEncodeCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; which fixed the avcC atom being incomplete (MP4 stores the H264 parameter sets out-of-band in the avcC box, so the encoder must emit global extradata).

And also replaced

//rescale the input timestamps to output timestamps
  out_pkt->pts = av_rescale_q_rnd(out_pkt->pts, in_stream->time_base, pVideoEncodeCodecContext->time_base, AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
  out_pkt->dts = av_rescale_q_rnd(out_pkt->dts, in_stream->time_base, pVideoEncodeCodecContext->time_base, AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
  out_pkt->duration = av_rescale_q(out_pkt->duration, in_stream->time_base, out_stream->time_base);
  out_pkt->stream_index = out_stream_index;
  out_pkt->pos = -1;

with (took it from the ffmpeg source):

av_packet_rescale_ts(out_pkt, pVideoEncodeCodecContext->time_base, out_stream->time_base);
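
For intuition, av_packet_rescale_ts converts pts, dts and duration from the encoder time base to the muxer's stream time base in one call. A minimal sketch of the underlying arithmetic in Python (the 1/90000 stream time base is only an assumed example; the muxer chooses the real one when the header is written):

```python
from fractions import Fraction

def rescale_ts(ts, src_tb, dst_tb):
    # Same idea as av_rescale_q: re-express ts, counted in src_tb ticks,
    # in dst_tb ticks, rounding to the nearest integer.
    return round(ts * Fraction(*src_tb) / Fraction(*dst_tb))

# Encoder time base 1/30 (pts counts frames at 30 fps); assume the MP4
# stream time base came out as 1/90000.
print(rescale_ts(1, (1, 30), (1, 90000)))   # frame 1  -> pts 3000
print(rescale_ts(30, (1, 30), (1, 90000)))  # frame 30 -> pts 90000 (1 second)
```

Rescaling from the encoder time base (rather than in_stream->time_base, as my original code did) is what makes the output timestamps land on the right values.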

Now I have a perfectly working .mp4 with correct timestamps.
