
How can I draw on a video while recording it in Android, and save the video and the drawing?

I am trying to develop an app that lets me draw on a video while recording it, and then save both the recording and the drawing in one mp4 file for later use. Also, I want to use the camera2 API, especially since I need my app to run on devices with API 21 and higher, and I always avoid deprecated libraries.

I have tried many approaches, including FFmpeg, where I overlaid the TextureView.getBitmap() (from the camera) with a bitmap taken from the drawing canvas. It worked, but because getBitmap() is slow, the recording couldn't capture enough frames (not even 25 fps), so the resulting video played back far too fast. I want audio to be included as well.
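For context, the per-frame compositing I was doing looked roughly like the sketch below; drawingView is a hypothetical custom View that holds the user's strokes, and the FFmpeg encoding side is omitted. The textureView.getBitmap() call is the slow part:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.TextureView;
import android.view.View;

//composites the current camera preview frame with the user's drawing into one Bitmap
Bitmap compositeFrame(TextureView textureView, View drawingView) {
    Bitmap cameraFrame = textureView.getBitmap();    //slow: copies the preview frame from the GPU
    Bitmap frame = cameraFrame.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(frame);
    drawingView.draw(canvas);                         //draws the user's strokes on top of the camera frame
    return frame;                                     //this Bitmap was then queued for FFmpeg encoding
}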

I thought about the MediaProjection API, but I am not sure whether its VirtualDisplay can capture only the layout containing the camera preview and the drawing, because the app user may also add text to the video, and I don't want the keyboard to appear in the recording.
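For reference, this is roughly how I imagined that capture being set up; resultCode and data would come from the screen-capture permission Intent, and recorderSurface is a placeholder for whatever consumes the frames (e.g. MediaRecorder.getSurface()). Because the whole display is mirrored, the keyboard and any other UI would end up in the recording too:

import android.content.Context;
import android.content.Intent;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.projection.MediaProjection;
import android.media.projection.MediaProjectionManager;
import android.view.Surface;

//mirrors the whole screen into recorderSurface
VirtualDisplay startProjection(Context context, int resultCode, Intent data,
                               Surface recorderSurface, int width, int height, int dpi) {
    MediaProjectionManager manager =
            (MediaProjectionManager) context.getSystemService(Context.MEDIA_PROJECTION_SERVICE);
    MediaProjection projection = manager.getMediaProjection(resultCode, data);
    return projection.createVirtualDisplay("capture", width, height, dpi,
            DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
            recorderSurface, null /* callback */, null /* handler */);
}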

Please help; after a week of research I have found nothing that works for me.

PS: I don't mind if there is a bit of processing time after the user presses the "Stop Recording" button.

EDITED:

Now, after Eddy's answer, I am using the shadercam app to draw on the camera surface, since that app already does the video rendering; the workaround is to render my canvas into a bitmap and then into a GL texture, but I have not been able to do it successfully. I need your help guys, I need to finish the app :S

I am using the shadercam library ( https://github.com/googlecreativelab/shadercam ), and I replaced the "ExampleRenderer" file with the following code:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.SurfaceTexture;
import android.opengl.GLES20;
// CameraRenderer is the base renderer class from the shadercam library

public class WriteDrawRenderer extends CameraRenderer
{
    private float offsetR = 1f;
    private float offsetG = 1f;
    private float offsetB = 1f;

    //sentinel values meaning "no new touch"; setTouchPoint() replaces them with real screen coordinates
    private float touchX = 1000000000;
    private float touchY = 1000000000;

    //bitmap that holds everything drawn/written by the user
    private Bitmap textBitmap;

    //id of the GL texture that textBitmap is uploaded into
    private int textureId;

    private boolean isFirstTime = true;

    //creates a new canvas that will draw into a bitmap instead of rendering into the screen
    private Canvas bitmapCanvas;

    /**
     * If nothing is modified here, the default shaders from shadercam's assets folder will be used.
     *
     * Base all shaders on those, since some default uniforms/textures for the camera
     * coordinates and texture coordinates are passed in every time
     */
    public WriteDrawRenderer(Context context, SurfaceTexture previewSurface, int width, int height)
    {
        super(context, previewSurface, width, height, "touchcolor.frag.glsl", "touchcolor.vert.glsl");
        //other setup can be done here if needed
    }

    /**
     * we override {@link #setUniformsAndAttribs()} and make sure to call the super so we can add
     * our own uniforms to our shaders here. CameraRenderer handles the rest for us automatically
     */
    @Override
    protected void setUniformsAndAttribs()
    {
        super.setUniformsAndAttribs();

        int offsetRLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetR");
        int offsetGLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetG");
        int offsetBLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetB");

        GLES20.glUniform1f(offsetRLoc, offsetR);
        GLES20.glUniform1f(offsetGLoc, offsetG);
        GLES20.glUniform1f(offsetBLoc, offsetB);

        if (touchX < 1000000000 && touchY < 1000000000)
        {
            //creates a Paint object
            Paint yellowPaint = new Paint();
            //makes it yellow
            yellowPaint.setColor(Color.YELLOW);
            //sets the anti-aliasing for texts
            yellowPaint.setAntiAlias(true);
            yellowPaint.setTextSize(70);

            if (isFirstTime)
            {
                textBitmap = Bitmap.createBitmap(mSurfaceWidth, mSurfaceHeight, Bitmap.Config.ARGB_8888);
                bitmapCanvas = new Canvas(textBitmap);
            }

            bitmapCanvas.drawText("Test Text", touchX, touchY, yellowPaint);

            if (isFirstTime)
            {
                textureId = addTexture(textBitmap, "textBitmap");
                isFirstTime = false;
            }
            else
            {
                updateTexture(textureId, textBitmap);
            }

            touchX = 1000000000;
            touchY = 1000000000;
        }
    }

    /**
     * stores the latest touch point on the TextureView so that the next rendered frame
     * draws the text at that position
     * @param rawX raw x on screen
     * @param rawY raw y on screen
     */
    public void setTouchPoint(float rawX, float rawY)
    {
        this.touchX = rawX;
        this.touchY = rawY;
    }
}
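For reference, I assume shadercam's addTexture() and updateTexture() helpers wrap the standard bitmap-to-texture upload; a standalone sketch of that upload (independent of shadercam, and it must run on the GL thread) would look roughly like this:

import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

//creates a GL texture and uploads the Canvas bitmap into it
static int createBitmapTexture(Bitmap bitmap) {
    int[] ids = new int[1];
    GLES20.glGenTextures(1, ids, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    return ids[0];
}

//re-uploads the bitmap after more drawing has been added to it
static void updateBitmapTexture(int textureId, Bitmap bitmap) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap);
}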

Please help guys, it's been a month and I am still stuck on the same app :( and I have no idea about OpenGL. I've been trying to use this project in my app for two weeks, and nothing is rendered on the video.

Thanks in advance!

Here's a rough outline that should work, but it's quite a bit of work:

  1. Set up an android.media.MediaRecorder for recording the video and audio
  2. Get a Surface from the MediaRecorder and set up an EGLImage from it ( https://developer.android.com/reference/android/opengl/EGL14.html#eglCreateWindowSurface(android.opengl.EGLDisplay, android.opengl.EGLConfig, java.lang.Object, int[], int) ); you'll need a whole OpenGL context and setup for this. Then you'll need to set that EGLImage as your render target.
  3. Create a SurfaceTexture within that GL context.
  4. Configure the camera to send data to that SurfaceTexture.
  5. Start the MediaRecorder.
  6. On each frame received from the camera, convert the drawing done by the user to a GL texture, and composite the camera texture and the user drawing (see the sketch after this list).
  7. Finally, call glSwapBuffers to send the composited frame to the video recorder.
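To make the outline a bit more concrete, here is a minimal, untested sketch of steps 1, 2, 5 and 7 using MediaRecorder and EGL14 directly; drawCameraFrame() and drawUserDrawing() are hypothetical placeholders for the GLES drawing of the camera SurfaceTexture and the user's drawing texture (steps 3, 4 and 6), and the output path and encoder settings are only examples:

import android.media.MediaRecorder;
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.view.Surface;
import java.io.IOException;

void startRecordingIntoGlSurface() throws IOException {
    //step 1: MediaRecorder that takes its video frames from a Surface
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    recorder.setVideoSize(1280, 720);
    recorder.setVideoFrameRate(30);
    recorder.setVideoEncodingBitRate(6000000);
    recorder.setOutputFile("/sdcard/drawn_video.mp4");        //placeholder path
    recorder.prepare();
    Surface recorderInput = recorder.getSurface();             //only valid after prepare()

    //step 2: wrap that Surface in an EGL window surface so GL rendering goes into the recording
    final int EGL_RECORDABLE_ANDROID = 0x3142;                 //EGL extension constant, not defined in EGL14
    EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
    int[] version = new int[2];
    EGL14.eglInitialize(display, version, 0, version, 1);
    int[] configAttribs = {
            EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
            EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
            EGL_RECORDABLE_ANDROID, 1,
            EGL14.EGL_NONE };
    EGLConfig[] configs = new EGLConfig[1];
    int[] numConfigs = new int[1];
    EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);
    int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
    EGLContext context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0);
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(display, configs[0], recorderInput,
            new int[]{ EGL14.EGL_NONE }, 0);
    EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);

    //step 5
    recorder.start();

    //steps 6-7, repeated for every camera frame:
    //drawCameraFrame();       //hypothetical: draw the camera SurfaceTexture full-screen
    //drawUserDrawing();       //hypothetical: draw the user's drawing texture on top
    EGL14.eglSwapBuffers(display, eglSurface);                 //sends the composited frame to the recorder
}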
