
FFMPEG Create internal pipeline for adding raw frames to AVI file (no input file)

I have an application that reads in a raw video file, does some image processing on each frame, then feeds the resulting BGRA-format byte[] frames to an FFMPEG process to eventually create an AVI file. Since this process works slightly differently from every other FFMPEG example I've seen, in that there is no existing input file, I'm wondering if anyone knows how to do this.

I start the FFMPEG process:

ProcessBuilder pBuilder = new ProcessBuilder(raid.getLocation()
                + "\\ffmpeg\\bin\\ffmpeg.exe", "-r", "30", "-vcodec",
                "rawvideo", "-f", "rawvideo", "-pix_fmt", "bgra", "-s",
                size, "-i", "pipe:0", "-r", "30", "-y", "-c:v", "libx264",
                "C:\\export\\2015-02-03\\1500\\EXPORT6.avi");

try
{
    process = pBuilder.start();
}
catch (IOException e)
{
    e.printStackTrace();
}

ffmpegInput = process.getOutputStream();

For each incoming byte[] frame, I write the frame to the process ("src" is a BufferedImage that I'm converting to a byte array):

try
{
     ByteArrayOutputStream baos = new ByteArrayOutputStream();
     ImageIO.write(src, ".png", baos);
     ffmpegInput.write(baos.toByteArray());
}
catch (IOException e)
{
     e.printStackTrace();
}

And once all frames have been written, I flush and close the stream:

try
{
     ffmpegInput.flush();
     ffmpegInput.close();
}
catch (IOException e)
{
     e.printStackTrace();
}

The AVI file is created, but it displays an error when opened. The FFMPEG log shows this:

ffmpeg version N-71102-g1f5d1ee Copyright (c) 2000-2015 the FFmpeg developers built with gcc 4.9.2 (GCC)
  configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
  libavutil      54. 20.101 / 54. 20.101
  libavcodec     56. 30.100 / 56. 30.100
  libavformat    56. 26.101 / 56. 26.101
  libavdevice    56.  4.100 / 56.  4.100
  libavfilter     5. 13.101 /  5. 13.101
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  1.100 /  1.  1.100
  libpostproc    53.  3.100 / 53.  3.100
Input #0, rawvideo, from 'pipe:0':
  Duration: N/A, bitrate: 294912 kb/s
    Stream #0:0: Video: rawvideo (BGRA / 0x41524742), bgra, 640x480, 294912 kb/s, 30 tbr, 30 tbn, 30 tbc
No pixel format specified, yuv444p for H.264 encoding chosen.
Use -pix_fmt yuv420p for compatibility with outdated media players.
[libx264 @ 00000000003bcbe0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
[libx264 @ 00000000003bcbe0] profile High 4:4:4 Predictive, level 3.0, 4:4:4 8-bit
Output #0, avi, to 'C:\export\2015-02-03\1500\EXPORT6.avi':
  Metadata:
    ISFT            : Lavf56.26.101
    Stream #0:0: Video: h264 (libx264) (H264 / 0x34363248), yuv444p, 640x480, q=-1--1, 30 fps, 30 tbn, 30 tbc
    Metadata:
      encoder         : Lavc56.30.100 libx264
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
frame=    0 fps=0.0 q=0.0 Lsize=       6kB time=00:00:00.00 bitrate=N/A    
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)

Any insight or ideas would be greatly appreciated!

Replace the ProcessBuilder arguments with this:

new ProcessBuilder(PATHOFYOURFFMPEG, "-r", "30", "-s", SIZEOFYOURFRAME,
        "-vcodec", "rawvideo", "-f", "rawvideo", "-pix_fmt", "bgra",
        "-i", "pipe:0", "-r", "30", PATHANDFILEEXTENTION);

Basically, this says the input frames come in at 30 fps, their codec and format are rawvideo, and the pixel format is "bgra" (spelling is important here; I misspelled it, and it took me a work day to figure out). The input is pipe:0 (0 for stdin, 1 for stdout, 2 for stderr). The confusing part is on the Java side: you have to get the process's "OutputStream" and write your frame data (byte[]) to it, which becomes stdin for ffmpeg. Confusing, huh?
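Since ffmpeg is told to expect raw bgra bytes on stdin, the frame data you write must actually be packed BGRA, not an encoded image. A minimal sketch of pulling BGRA bytes out of a BufferedImage (the class and method names here are illustrative, not from the original post):

```java
import java.awt.image.BufferedImage;

class FrameUtil {
    // Convert a BufferedImage to a packed byte[] in BGRA order,
    // matching the "-pix_fmt bgra" that ffmpeg expects on stdin.
    static byte[] toBgra(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        byte[] out = new byte[w * h * 4];
        int i = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int argb = src.getRGB(x, y);      // packed as 0xAARRGGBB
                out[i++] = (byte) (argb);         // B
                out[i++] = (byte) (argb >> 8);    // G
                out[i++] = (byte) (argb >> 16);   // R
                out[i++] = (byte) (argb >>> 24);  // A
            }
        }
        return out;
    }
}
```

Each frame is then a single `ffmpegInput.write(FrameUtil.toBgra(src));` call, with one flush/close after the last frame.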

Finally, I specified -r 30 again. This is not a mistake: options placed before -i apply to the input, and options placed after -i apply to the output! So here I'm just saying the output video should be 30 fps as well. Hope this helps!
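The before/after -i split is easy to see if you build the argument list in a helper first (the method and placeholder names below are hypothetical, just to make the ordering explicit):

```java
import java.util.Arrays;
import java.util.List;

class FfmpegArgs {
    // Options before "-i" describe the raw frames arriving on stdin;
    // options after "-i" describe the encoded output file.
    static List<String> build(String ffmpegPath, String size, String outFile) {
        return Arrays.asList(
                ffmpegPath,
                "-r", "30",             // input frame rate
                "-s", size,             // input frame size, e.g. "640x480"
                "-vcodec", "rawvideo",
                "-f", "rawvideo",
                "-pix_fmt", "bgra",     // must match the bytes you write
                "-i", "pipe:0",         // read frames from stdin
                "-r", "30",             // output frame rate
                outFile);
    }
}
```

Pass the resulting list to `new ProcessBuilder(args)`; keeping it in one place makes it harder to accidentally move an input option to the output side.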

PS: I tried passing the byte[] frames into a queue and an array list as a cache. For some reason, the same copy got cached over and over. By the time I figured that out, half a day had passed. So check your frame data before passing it to ffmpeg.
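The "same copy over and over" symptom is typically what happens when one byte[] buffer is reused for every frame and the queue ends up holding many references to that single array. A small sketch of the usual fix, making a defensive copy on enqueue (FrameQueue is an illustrative name, not from the original post):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

class FrameQueue {
    private final Queue<byte[]> frames = new ArrayDeque<>();

    // Enqueue a defensive copy: if the caller reuses one buffer for
    // every frame, queueing the reference itself would leave the queue
    // full of identical "latest frame" data.
    void add(byte[] frame) {
        frames.add(Arrays.copyOf(frame, frame.length));
    }

    byte[] poll() {
        return frames.poll();
    }
}
```

With the copy in place, later writes into the caller's buffer no longer overwrite frames already queued.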

I believe you can add [ ffmpeg_folder ]/bin to %PATH% and then just call

ffmpeg .
