
Decode android's hardware encoded H264 camera feed using ffmpeg in real time

I'm trying to use the hardware H264 encoder on Android to create video from the camera, and use FFmpeg to mux in audio (all on the Android phone itself).

What I've accomplished so far is packetizing the H264 video into rtsp packets and decoding it using VLC (over UDP), so I know the video is at least correctly formatted. However, I'm having trouble getting the video data to ffmpeg in a format it can understand.

I've tried sending the same rtsp packets to port 5006 on localhost (over UDP), then providing ffmpeg with an sdp file that tells it which local port the video stream is coming in on and how to decode it, if I understand rtsp streaming correctly. However this doesn't work, and I'm having trouble diagnosing why, as ffmpeg just sits there waiting for input.
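For concreteness, a minimal sdp file for an H264 stream of this kind might look like the following sketch (the payload type 96 and port 5006 are assumptions based on the description above, not the actual file used):

v=0
o=- 0 0 IN IP4 127.0.0.1
s=H264 Camera Stream
c=IN IP4 127.0.0.1
t=0 0
m=video 5006 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1

Note that newer ffmpeg builds refuse to read an SDP from disk unless its protocols are whitelisted, e.g. ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp -c copy out.ts.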

For reasons of latency and scalability I can't just send the video and audio to the server and mux it there; it has to be done on the phone, in as lightweight a manner as possible.

What I guess I'm looking for are suggestions as to how this can be accomplished. The optimal solution would be sending the packetized H264 video to ffmpeg over a pipe, but then I can't send ffmpeg the sdp file parameters it needs to decode the video.

I can provide more information on request, like how ffmpeg is compiled for Android, but I doubt that's necessary.

Oh, and the way I start ffmpeg is through the command line; I would really rather avoid mucking about with jni if that's at all possible.

Any help would be much appreciated, thanks.

Have you tried using java.lang.Runtime?

String[] parameters = {"ffmpeg", "other", "args"};
Process process = Runtime.getRuntime().exec(parameters);

InputStream in = process.getInputStream();    // the process's stdout
OutputStream out = process.getOutputStream(); // the process's stdin
InputStream err = process.getErrorStream();   // the process's stderr

Then you write to the process's stdin and read from its stdout and stderr. It's not a pipe, but it should be better than using a network interface.
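As a minimal sketch of that approach (the ffmpeg arguments and the writeNal helper here are illustrative assumptions, not a tested pipeline), you can make ffmpeg read from stdin by giving it pipe:0 as the input and then push each encoded H264 NAL unit into the process's OutputStream:

import java.io.IOException;
import java.io.OutputStream;

public class FfmpegPipe {
    private Process process;
    private OutputStream ffmpegStdin;

    public void start() throws IOException {
        // "pipe:0" tells ffmpeg to read its input from stdin
        String[] cmd = {"ffmpeg", "-f", "h264", "-i", "pipe:0",
                        "-c:v", "copy", "-f", "mpegts", "udp://127.0.0.1:1234"};
        process = Runtime.getRuntime().exec(cmd);
        ffmpegStdin = process.getOutputStream();
    }

    // Call with each encoded NAL unit as it comes out of the hardware encoder
    public void writeNal(byte[] nal) throws IOException {
        ffmpegStdin.write(nal);
        ffmpegStdin.flush();
    }

    public void stop() throws IOException {
        ffmpegStdin.close(); // EOF lets ffmpeg finish writing the output
        process.destroy();
    }
}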

A little bit late, but I think this is a good question and it doesn't have a good answer yet.

If you want to stream the camera and mic from an Android device you have two main alternatives: Java or NDK implementations.

  1. Java implementation.

    I'm only going to mention the idea, but basically it is to implement an RTSP server and the RTP protocol in Java, based on the standards Real-Time Streaming Protocol Version 2.0 and RTP Payload Format for H.264 Video, as sketched below. This task will be very long and hard. But if you are doing your PhD it could be nice to have a nice RTSP Java lib for Android.
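    To make the idea concrete, here is a minimal sketch of building one RTP packet around an H.264 NAL unit, following the 12-byte header of RFC 3550 (the payload type 96, the SSRC value, and single-NAL-unit packetization mode are assumptions for illustration):

    import java.nio.ByteBuffer;

    // Minimal RTP packetizer sketch (assumptions: PT=96, single NAL unit mode)
    public class RtpPacketizer {
        private short sequenceNumber = 0;
        private final int ssrc = 0x12345678; // arbitrary stream identifier

        public byte[] packetize(byte[] nal, int timestamp90kHz, boolean lastOfFrame) {
            ByteBuffer pkt = ByteBuffer.allocate(12 + nal.length); // network byte order
            pkt.put((byte) 0x80);                               // V=2, P=0, X=0, CC=0
            pkt.put((byte) ((lastOfFrame ? 0x80 : 0x00) | 96)); // marker bit + payload type
            pkt.putShort(sequenceNumber++);                     // sequence number
            pkt.putInt(timestamp90kHz);                         // 90 kHz clock for video
            pkt.putInt(ssrc);                                   // SSRC
            pkt.put(nal);                                       // NAL unit as payload
            return pkt.array();                                 // ready to send over UDP
        }
    }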

  2. NDK implementation.

    This alternative includes various solutions. The main idea is to use a powerful C or C++ library in our Android application, in this case FFmpeg. This library can be compiled for Android and supports various architectures. The problem with this approach is that you may need to learn about the Android NDK, C and C++ to accomplish it.

    But there is an alternative. You can wrap the C library and use FFmpeg through it. But how?

    For example, using FFmpeg Android, which has been compiled with x264, libass, fontconfig, freetype and fribidi and supports various architectures. But it is still hard to program: if you want to stream in real time you need to deal with file descriptors and in/out streams.

    The best alternative, from a Java programming point of view, is to use JavaCV. JavaCV uses wrappers of commonly used computer vision libraries (OpenCV, FFmpeg, etc.) and provides utility classes to make their functionality easier to use on the Java platform, including (of course) Android.

    JavaCV also comes with hardware accelerated full-screen image display (CanvasFrame and GLCanvasFrame), easy-to-use methods to execute code in parallel on multiple cores (Parallel), user-friendly geometric and color calibration of cameras and projectors (GeometricCalibrator, ProCamGeometricCalibrator, ProCamColorCalibrator), detection and matching of feature points (ObjectFinder), a set of classes that implement direct image alignment of projector-camera systems (mainly GNImageAligner, ProjectiveTransformer, ProjectiveColorTransformer, ProCamTransformer, and ReflectanceInitializer), a blob analysis package (Blobs), as well as miscellaneous functionality in the JavaCV class. Some of these classes also have an OpenCL and OpenGL counterpart, their names ending with CL or starting with GL, i.e.: JavaCVCL, GLCanvasFrame, etc.

But how can we use this solution?

Here we have a basic implementation to stream using UDP:

String streamURL = "udp://ip_destination:port";
recorder = new FFmpegFrameRecorder(streamURL, frameWidth, frameHeight, 1); // 1 audio channel
recorder.setInterleaved(false);
// video options //
recorder.setFormat("mpegts");                   // MPEG-TS container suits raw UDP
recorder.setVideoOption("tune", "zerolatency"); // x264 tuning for low latency
recorder.setVideoOption("preset", "ultrafast"); // fastest x264 preset
recorder.setVideoBitrate(5 * 1024 * 1024);      // 5 Mbit/s
recorder.setFrameRate(30);
recorder.setSampleRate(AUDIO_SAMPLE_RATE);
recorder.setVideoCodec(AV_CODEC_ID_H264);
recorder.setAudioCodec(AV_CODEC_ID_AAC);

This part of the code shows how to initialize the FFmpegFrameRecorder object called recorder. This object will capture and encode the frames obtained from the camera and the samples obtained from the microphone.
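One step the snippet leaves implicit (this lifecycle is an assumption about the surrounding code, which isn't shown here): the recorder must be started before any frame is recorded and stopped when the stream ends.

recorder.start();   // opens the UDP output and writes the stream header
// ... record() / recordSamples() while streaming ...
recorder.stop();    // flushes pending packets and writes the trailer
recorder.release(); // frees the native resources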

If you want to capture a preview in the same Android app then you need to implement a CameraPreview class; this class will convert the raw data served by the Camera and create the Preview and the Frame for the FFmpegFrameRecorder.
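A hypothetical sketch of that conversion, assuming the old android.hardware.Camera preview callback (NV21 by default) and JavaCV's AndroidFrameConverter; previewWidth and previewHeight are placeholder fields:

private final AndroidFrameConverter converter = new AndroidFrameConverter();

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // Convert the NV21 preview buffer into a JavaCV Frame the recorder accepts
    Frame frame = converter.convert(data, previewWidth, previewHeight);
    try {
        recorder.record(frame);
    } catch (FFmpegFrameRecorder.Exception e) {
        e.printStackTrace();
    }
}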

Remember to replace the ip_destination with the IP of the PC or device where you want to send the stream. The port can be 8080, for example.

@Override
public Mat onCameraFrame(Mat mat)
{
    if (audioRecordRunnable == null) {
        startTime = System.currentTimeMillis();
        return mat;
    }
    if (recording && mat != null) {
        synchronized (semaphore) {
            try {
                Frame frame = converterToMat.convert(mat);
                // FFmpegFrameRecorder timestamps are in microseconds;
                // only move the clock forward, never backwards
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(frame);
            } catch (FFmpegFrameRecorder.Exception e) {
                LogHelper.i(TAG, e.getMessage());
                e.printStackTrace();
            }
        }
    }
    return mat;
}

This method shows the implementation of the onCameraFrame method, which gets the Mat (picture) from the camera; it is converted to a Frame and recorded by the FFmpegFrameRecorder object.

@Override
public void onSampleReady(ShortBuffer audioData)
{
    if (recorder == null) return;
    if (recording && audioData == null) return;

    try {
        // Keep the audio timestamp (in microseconds) monotonically increasing
        long t = 1000 * (System.currentTimeMillis() - startTime);
        if (t > recorder.getTimestamp()) {
            recorder.setTimestamp(t);
        }
        LogHelper.e(TAG, "audioData: " + audioData);
        recorder.recordSamples(audioData);
    } catch (FFmpegFrameRecorder.Exception e) {
        LogHelper.v(TAG, e.getMessage());
        e.printStackTrace();
    }
}

The same goes for the audio: the audioData is a ShortBuffer object that will be recorded by the FFmpegFrameRecorder.
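A hypothetical sketch of where such a ShortBuffer could come from, assuming android.media.AudioRecord running on a background thread (the recording flag and AUDIO_SAMPLE_RATE mirror the fields used above):

int minBuf = AudioRecord.getMinBufferSize(AUDIO_SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        AUDIO_SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBuf);
short[] samples = new short[minBuf / 2]; // 16-bit PCM: two bytes per sample
audioRecord.startRecording();
while (recording) {
    int read = audioRecord.read(samples, 0, samples.length);
    if (read > 0) {
        onSampleReady(ShortBuffer.wrap(samples, 0, read));
    }
}
audioRecord.stop();
audioRecord.release();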

On the destination PC or device you can run the following command to get the stream:

ffplay udp://ip_source:port

The ip_source is the IP of the smartphone that is streaming the camera and mic stream. The port must be the same one used above, e.g. 8080.
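If playback latency matters on the receiving side, ffplay's -fflags nobuffer option reduces input buffering (an optional variant, not part of the setup above):

ffplay -fflags nobuffer udp://ip_source:port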

I created a solution in my github repository here: UDPAVStreamer.

Good luck!
