
Android: how to display camera preview with callback?

What I need to do is quite simple: I want to manually display the preview from the camera using the camera callback, and I want to get at least 15 fps on a real device. I don't even need color; I just need to preview a grayscale image. Frames from the camera arrive in YUV format and you have to process them somehow, which is the main performance problem. I'm using API level 8.

In all cases I'm using camera.setPreviewCallbackWithBuffer(), which is faster than camera.setPreviewCallback(). It seems that I can get about 24 fps here if I'm not displaying the preview, so the callback itself is not the problem.
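For reference, this is roughly what the buffer-based setup looks like. The only non-obvious part is the buffer size: NV21 averages 12 bits per pixel (a full-resolution Y plane plus a half-resolution interleaved chroma plane), so each frame needs width*height*3/2 bytes. The sketch below keeps the math in a plain helper; the Android calls (camera, callback) appear only in comments since they assume objects from your own code:

```java
public class PreviewBuffers {
    // NV21 stores a full-resolution Y plane followed by a half-resolution
    // interleaved V/U plane: 8 + 4 = 12 bits per pixel on average.
    public static int nv21BufferSize(int width, int height) {
        return width * height * 3 / 2;
    }

    public static void main(String[] args) {
        // A 640x480 preview frame needs 460800 bytes per callback buffer.
        System.out.println(nv21BufferSize(640, 480)); // prints 460800

        // On Android you would then register a few reusable buffers
        // (camera and previewSize are assumed to exist in your code):
        //
        //   int size = nv21BufferSize(previewSize.width, previewSize.height);
        //   for (int i = 0; i < 3; i++)
        //       camera.addCallbackBuffer(new byte[size]);
        //   camera.setPreviewCallbackWithBuffer(callback);
        //
        // and hand each buffer back at the end of onPreviewFrame() with
        //   camera.addCallbackBuffer(data);
    }
}
```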

I have tried these solutions:

1. Display the camera preview on a SurfaceView as a Bitmap. It works, but the performance is about 6 fps.

baos = new ByteArrayOutputStream();
yuvimage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null);

yuvimage.compressToJpeg(new Rect(0, 0, prevX, prevY), 80, baos);
jdata = baos.toByteArray();

bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length); // Convert to Bitmap; this is the main bottleneck, it takes a lot of time

canvas.drawBitmap(bmp, 0, 0, paint);


2. Display the camera preview on a GLSurfaceView as a texture. Here I was displaying only the luminance data (a greyscale image), which is quite easy: it requires only one arraycopy() per frame. I can get about 12 fps, but I need to apply some filters to the preview, and it seems that can't be done fast in OpenGL ES 1, so I can't use this solution. There are some details of this in another question.


3. Display the camera preview on a (GL)SurfaceView, using the NDK to process the YUV data. I found a solution here that uses a C function and the NDK, but I didn't manage to use it (here are some more details). In any case, that solution is built to return a ByteBuffer to be displayed as a texture in OpenGL, so it wouldn't be faster than the previous attempt. I would have to modify it to return an int[] array that can be drawn with canvas.drawBitmap(), but I don't understand C well enough to do that.
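As background for attempt 2: the single arraycopy() works because the first width*height bytes of an NV21 frame are exactly the luminance (Y) plane. A minimal sketch of that extraction (class and method names are mine, for illustration):

```java
public class LumaExtract {
    // Copy the Y (luminance) plane out of an NV21 frame. The Y plane is
    // the first width*height bytes; the chroma bytes that follow are ignored.
    public static byte[] extractLuma(byte[] nv21, int width, int height, byte[] outLuma) {
        System.arraycopy(nv21, 0, outLuma, 0, width * height);
        return outLuma;
    }

    public static void main(String[] args) {
        // Tiny synthetic 2x2 "frame": 4 Y bytes followed by 2 chroma bytes.
        byte[] frame = {10, 20, 30, 40, (byte) 128, (byte) 128};
        byte[] luma = extractLuma(frame, 2, 2, new byte[4]);
        System.out.println(java.util.Arrays.toString(luma)); // [10, 20, 30, 40]
    }
}
```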


So, is there any other way that I'm missing, or some improvement to the attempts I tried?

I'm working on exactly the same issue, but haven't got quite as far as you have.

Have you considered drawing the pixels directly to the canvas without encoding them to JPEG first? Inside the OpenCV kit http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.3.1/OpenCV-2.3.1-android-bin.tar.bz2/download (which doesn't actually use opencv; don't worry), there's a project called tutorial-0-androidcamera that demonstrates converting the YUV pixels to RGB and then writing them directly to a bitmap.

The relevant code is essentially:

// Note: the real Camera.PreviewCallback signature is onPreviewFrame(byte[], Camera);
// width and height come from the configured preview size.
public void onPreviewFrame(byte[] data, Camera camera, int width, int height) {
    int frameSize = width * height;
    int[] rgba = new int[frameSize];

    // Convert YUV (NV21: Y plane, then interleaved V/U pairs) to RGB
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++) {
            int y = (0xff & ((int) data[i * width + j]));
            int v = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 0]));
            int u = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 1]));
            y = y < 16 ? 16 : y;

            int r = Math.round(1.164f * (y - 16) + 1.596f * (v - 128));
            int g = Math.round(1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
            int b = Math.round(1.164f * (y - 16) + 2.018f * (u - 128));

            r = r < 0 ? 0 : (r > 255 ? 255 : r);
            g = g < 0 ? 0 : (g > 255 ? 255 : g);
            b = b < 0 ? 0 : (b > 255 ? 255 : b);

            // ARGB_8888 packs pixels as 0xAARRGGBB
            rgba[i * width + j] = 0xff000000 | (r << 16) | (g << 8) | b;
        }

    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.setPixels(rgba, 0 /* offset */, width /* stride */, 0, 0, width, height);
    Canvas canvas = mHolder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(bmp, (canvas.getWidth() - width) / 2, (canvas.getHeight() - height) / 2, null);
        mHolder.unlockCanvasAndPost(canvas);
    } else {
        Log.w(TAG, "Canvas is null!");
    }
    bmp.recycle();
}

Of course you'd have to adapt it to meet your needs (e.g. not allocating rgba on every frame), but it might be a start. I'd love to see whether it works for you or not; I'm still fighting problems orthogonal to yours at the moment.
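One way to avoid those per-frame allocations is to preallocate the pixel array once and reuse it every frame. A sketch under that assumption (class name and standalone main are mine; on Android you would likewise create one Bitmap up front and refill it with setPixels()). The conversion uses standard BT.601 coefficients for NV21 input:

```java
public class Nv21Converter {
    private final int width;
    private final int height;
    private final int[] rgba; // allocated once, reused every frame: no GC churn

    public Nv21Converter(int width, int height) {
        this.width = width;
        this.height = height;
        this.rgba = new int[width * height];
    }

    // Convert one NV21 frame to ARGB_8888 pixels, writing into the
    // preallocated array. BT.601: R = 1.164(Y-16) + 1.596(V-128), etc.
    public int[] convert(byte[] data) {
        int frameSize = width * height;
        for (int i = 0; i < height; i++)
            for (int j = 0; j < width; j++) {
                int y = 0xff & data[i * width + j];
                int v = 0xff & data[frameSize + (i >> 1) * width + (j & ~1)];
                int u = 0xff & data[frameSize + (i >> 1) * width + (j & ~1) + 1];
                y = Math.max(y, 16);
                int r = clamp(Math.round(1.164f * (y - 16) + 1.596f * (v - 128)));
                int g = clamp(Math.round(1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128)));
                int b = clamp(Math.round(1.164f * (y - 16) + 2.018f * (u - 128)));
                rgba[i * width + j] = 0xff000000 | (r << 16) | (g << 8) | b;
            }
        return rgba;
    }

    private static int clamp(int c) {
        return c < 0 ? 0 : (c > 255 ? 255 : c);
    }

    public static void main(String[] args) {
        // A 2x2 NV21 frame with Y=128 and U=V=128 should decode to mid-grey.
        byte[] frame = {(byte) 128, (byte) 128, (byte) 128, (byte) 128, (byte) 128, (byte) 128};
        int pixel = new Nv21Converter(2, 2).convert(frame)[0];
        System.out.printf("%08x%n", pixel); // prints ff828282
    }
}
```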

I think Michael's on the right track. First you can try this method to convert from YUV to grayscale. Clearly it's doing almost the same thing as his, but a little more succinctly for what you want.

//YUV Space to Greyscale
static public void YUVtoGrayScale(int[] rgb, byte[] yuv420sp, int width, int height){
    final int frameSize = width * height;
    for (int pix = 0; pix < frameSize; pix++){
        int pixVal = (0xff & ((int) yuv420sp[pix])) - 16;
        if (pixVal < 0) pixVal = 0;
        if (pixVal > 255) pixVal = 255;
        rgb[pix] = 0xff000000 | (pixVal << 16) | (pixVal << 8) | pixVal;
    }
}


Second, don't create a ton of work for the garbage collector. Your bitmaps and arrays will be a fixed size; create them once, not in onPreviewFrame().

Doing that you'll end up with something that looks like this:

public PreviewCallback callback = new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if ((mSelectView == null) || !inPreview)
            return;
        if (mSelectView.mBitmap == null) {
            // initialize SelectView bitmaps, arrays, etc.
            //mSelectView.mBitmap = Bitmap.createBitmap(mSelectView.mImageWidth, mSelectView.mImageHeight, Bitmap.Config.RGB_565);
            //etc
        }
        // Pass image data to SelectView
        System.arraycopy(data, 0, mSelectView.mYUVData, 0, data.length);
        mSelectView.invalidate();
    }
};

And then the view where you want to draw it looks like this:

class SelectView extends View {
    Bitmap mBitmap;
    Bitmap croppedView;
    byte[] mYUVData;
    int[] mRGBData;
    int mImageHeight;
    int mImageWidth;

    public SelectView(Context context) {
        super(context);
        mBitmap = null;
        croppedView = null;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mBitmap != null) {
            int canvasWidth = canvas.getWidth();
            int canvasHeight = canvas.getHeight();
            // Convert from YUV to greyscale
            YUVtoGrayScale(mRGBData, mYUVData, mImageWidth, mImageHeight);
            mBitmap.setPixels(mRGBData, 0, mImageWidth, 0, 0, mImageWidth, mImageHeight);
            Rect crop = new Rect(180, 220, 290, 400);
            Rect dst = new Rect(0, 0, canvasWidth, canvasHeight / 2);
            canvas.drawBitmap(mBitmap, crop, dst, null);
        }
        super.onDraw(canvas);
    }
}

This example shows a cropped and distorted selection of the camera preview in real time, but it should give you the idea. It runs at high FPS on a Nexus S in greyscale and should work for your needs as well.

Is this not what you want? Just use a SurfaceView in your layout, then somewhere in your initialization (such as onResume()):

SurfaceView surfaceView = ...
SurfaceHolder holder = surfaceView.getHolder();
...
Camera camera = ...;
camera.setPreviewDisplay(holder); // may throw IOException
camera.startPreview();

It just sends the frames straight to the view as fast as they arrive.

If you want grayscale, modify the camera parameters with setColorEffect("mono").

For very basic and simple effects, there is

Camera.Parameters parameters = mCamera.getParameters();
parameters.setColorEffect(Parameters.EFFECT_AQUA);

I figured out that these effects behave DIFFERENTLY depending on the device. For instance, on my phone (Galaxy S II) it looks kind of like a comic effect, whereas on the Galaxy S it is 'just' a blue shade.
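Because support also varies per device, it's worth checking Camera.Parameters.getSupportedColorEffects() before setting one. A small helper for that selection logic (the helper name is mine; the actual Android calls appear only in comments since they need a live Camera instance):

```java
public class EffectPicker {
    // Return the preferred effect if the device supports it, otherwise null.
    public static String pickEffect(java.util.List<String> supported, String preferred) {
        return (supported != null && supported.contains(preferred)) ? preferred : null;
    }

    public static void main(String[] args) {
        java.util.List<String> supported = java.util.Arrays.asList("none", "mono", "aqua");
        System.out.println(pickEffect(supported, "mono"));      // mono
        System.out.println(pickEffect(supported, "posterize")); // null

        // On a device you would feed in the real list
        // (mCamera is assumed to be an open Camera instance):
        //
        //   Camera.Parameters p = mCamera.getParameters();
        //   String effect = pickEffect(p.getSupportedColorEffects(),
        //                              Camera.Parameters.EFFECT_MONO);
        //   if (effect != null) {
        //       p.setColorEffect(effect);
        //       mCamera.setParameters(p);
        //   }
    }
}
```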

Its pro: it works as a live preview.

I looked around some other camera apps, and they obviously also faced this problem. So what did they do? They capture the default camera image, apply a filter to the bitmap data, and show the image in a simple ImageView. It's certainly not as cool as a live preview, but you won't ever face performance problems.

I believe I read in a blog that the greyscale data is in the first x*y bytes. YUV should represent luminance, so the data is there, although it isn't a perfect greyscale. It's great for relative brightness, but not for true greyscale, because in RGB the colors aren't all equally bright: green is usually given a stronger weight in luminosity conversions. Hope this helps!
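For comparison, the usual BT.601 luma conversion weights green most heavily, which is what a "true" greyscale from RGB would use:

```java
public class Luma {
    // BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B (green dominates).
    public static int luma(int r, int g, int b) {
        return Math.round(0.299f * r + 0.587f * g + 0.114f * b);
    }

    public static void main(String[] args) {
        System.out.println(luma(255, 255, 255)); // 255: white is full brightness
        System.out.println(luma(0, 255, 0));     // 150: pure green is fairly bright
        System.out.println(luma(0, 0, 255));     // 29:  pure blue is much darker
    }
}
```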

Is there any special reason that you are forced to use GLES 1.0?

Because if not, see the accepted answer here: Android SDK: Get raw preview camera image without displaying it.

Generally it suggests using Camera.setPreviewTexture() in combination with GLES 2.0. In GLES 2.0 you can render a full-screen quad over the whole screen and create whatever effect you want.
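A minimal greyscale fragment shader for that approach might look like the following sketch (held here as a Java string constant; the uniform and varying names are mine). Camera frames delivered through setPreviewTexture() arrive as an external image, so the shader must declare GL_OES_EGL_image_external and sample via samplerExternalOES; the dot product collapses RGB to BT.601 luma:

```java
public class GreyShader {
    // Fragment shader: sample the external (camera) texture and
    // output its luma in all three channels.
    public static final String FRAGMENT =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "uniform samplerExternalOES uTexture;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "    vec4 c = texture2D(uTexture, vTexCoord);\n" +
            "    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
            "    gl_FragColor = vec4(y, y, y, 1.0);\n" +
            "}\n";

    public static void main(String[] args) {
        System.out.println(FRAGMENT);
    }
}
```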

It's most likely the fastest way possible.
