How can I draw on a video while recording it in android, and save the video and the drawing?
I'm trying to develop an app that lets me draw on a video while recording it, and then save both the recording and the drawing in a single mp4 file for later use. Also, I want to use the camera2 library; in particular, I need my app to run on devices above API 21, and I always avoid deprecated libraries.
I tried several ways to do it, including FFmpeg, in which I overlaid the TextureView.getBitmap() (from the camera) with a bitmap taken from the canvas. It worked, but since it's a slow function the video couldn't capture enough frames (not even 25 fps), and it played back too fast. I also want audio to be included.
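For context, the per-frame compositing described above amounts to something like the sketch below; `textureView` and `overlayBitmap` are illustrative names. The bottleneck is that `TextureView.getBitmap()` reads pixels back from the GPU on every frame, which is why the frame rate collapses:

```java
// Sketch of the slow CPU compositing path described above.
// textureView is the camera preview; overlayBitmap holds the user's drawing.
Bitmap frame = textureView.getBitmap();       // expensive GPU -> CPU readback, per frame
Canvas canvas = new Canvas(frame);
canvas.drawBitmap(overlayBitmap, 0, 0, null); // composite the drawing on top
// 'frame' would then be handed to FFmpeg as one encoded video frame
```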
I thought about the MediaProjection library, but I'm not sure whether it can capture a layout containing only the camera and the drawing inside its VirtualDisplay, because the app user may also add text on the video, and I don't want the keyboard to appear.
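For reference, MediaProjection mirrors the screen into a VirtualDisplay backed by a Surface; there is no flag to restrict it to a single view, which is why overlays like the keyboard would be captured too. A minimal sketch, assuming `resultCode`/`data` come from `onActivityResult` after `createScreenCaptureIntent()`, and `screenWidth`, `screenHeight`, `screenDensityDpi`, and `mediaRecorder` are set up elsewhere:

```java
// Sketch: obtain a projection after the user grants screen-capture permission.
MediaProjectionManager mgr =
        (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
MediaProjection projection = mgr.getMediaProjection(resultCode, data);

// The VirtualDisplay mirrors the whole screen into the recorder's input surface;
// it cannot be limited to one layout, so the keyboard would appear in the video.
VirtualDisplay display = projection.createVirtualDisplay(
        "record",
        screenWidth, screenHeight, screenDensityDpi,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mediaRecorder.getSurface(),
        null /* callback */, null /* handler */);
```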
Please help; it's been a week of research and I've found nothing that worked well for me.
P.S.: I don't mind a bit of processing time after the user presses the "Stop Recording" button.
EDITED:
Now, after Eddy's answer, I'm using the shadercam app to draw on the camera surface, since the app handles the video rendering. The remaining work is to render my canvas into a bitmap and then into a GL texture, but I haven't managed to do it successfully. I need your help; I need to finish the app :S
I'm using the shadercam library ( https://github.com/googlecreativelab/shadercam ), and I replaced the "ExampleRenderer" file with the following code:
public class WriteDrawRenderer extends CameraRenderer
{
    private float offsetR = 1f;
    private float offsetG = 1f;
    private float offsetB = 1f;

    private float touchX = 1000000000;
    private float touchY = 1000000000;

    private Bitmap textBitmap;
    private int textureId;
    private boolean isFirstTime = true;

    // creates a new canvas that will draw into a bitmap instead of rendering into the screen
    private Canvas bitmapCanvas;

    /**
     * By not modifying anything, our default shaders will be used in the assets folder of shadercam.
     *
     * Base all shaders off those, since there are some default uniforms/textures that will
     * be passed every time for the camera coordinates and texture coordinates
     */
    public WriteDrawRenderer(Context context, SurfaceTexture previewSurface, int width, int height)
    {
        super(context, previewSurface, width, height, "touchcolor.frag.glsl", "touchcolor.vert.glsl");
        // other setup if need be done here
    }

    /**
     * we override {@link #setUniformsAndAttribs()} and make sure to call the super so we can add
     * our own uniforms to our shaders here. CameraRenderer handles the rest for us automatically
     */
    @Override
    protected void setUniformsAndAttribs()
    {
        super.setUniformsAndAttribs();

        int offsetRLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetR");
        int offsetGLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetG");
        int offsetBLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetB");

        GLES20.glUniform1f(offsetRLoc, offsetR);
        GLES20.glUniform1f(offsetGLoc, offsetG);
        GLES20.glUniform1f(offsetBLoc, offsetB);

        if (touchX < 1000000000 && touchY < 1000000000)
        {
            // creates a Paint object
            Paint yellowPaint = new Paint();
            // makes it yellow
            yellowPaint.setColor(Color.YELLOW);
            // sets the anti-aliasing for text
            yellowPaint.setAntiAlias(true);
            yellowPaint.setTextSize(70);

            if (isFirstTime)
            {
                textBitmap = Bitmap.createBitmap(mSurfaceWidth, mSurfaceHeight, Bitmap.Config.ARGB_8888);
                bitmapCanvas = new Canvas(textBitmap);
            }

            bitmapCanvas.drawText("Test Text", touchX, touchY, yellowPaint);

            if (isFirstTime)
            {
                textureId = addTexture(textBitmap, "textBitmap");
                isFirstTime = false;
            }
            else
            {
                updateTexture(textureId, textBitmap);
            }

            touchX = 1000000000;
            touchY = 1000000000;
        }
    }

    /**
     * take touch points on that textureview and turn them into multipliers for the color channels
     * of our shader, simple, yet effective way to illustrate how easy it is to integrate app
     * interaction into our glsl shaders
     * @param rawX raw x on screen
     * @param rawY raw y on screen
     */
    public void setTouchPoint(float rawX, float rawY)
    {
        this.touchX = rawX;
        this.touchY = rawY;
    }
}
Please help, everyone: it's been a month and I'm still stuck on the same app :( and I have no knowledge of OpenGL. For two weeks I've been trying to use this project for my app, and nothing gets rendered on the video.
Thanks in advance!