Crop face from the CameraSource
I am implementing the example given in the google-vision face tracker.
MyFaceDetector class:
public class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    public SparseArray<Face> detect(Frame frame) {
        return mDelegate.detect(frame);
    }

    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
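MyFaceDetector is a plain delegating wrapper, and its detect(Frame) method is the natural hook point for inspecting each frame before handing it to the real detector. A minimal plain-Java sketch of that delegation pattern (hypothetical names, no Android dependencies, just to illustrate where the interception happens):

```java
import java.util.List;

// Stand-in for Detector<Face>.detect(Frame): one abstract method,
// so it can be implemented with a lambda in a test.
interface SimpleDetector {
    List<String> detect(int[] frameData);
}

// Same shape as MyFaceDetector: forward to a delegate, but observe
// the frame on the way through.
class InterceptingDetector implements SimpleDetector {
    private final SimpleDetector delegate;
    private int framesSeen = 0; // example of per-frame state the wrapper can keep

    InterceptingDetector(SimpleDetector delegate) {
        this.delegate = delegate;
    }

    @Override
    public List<String> detect(int[] frameData) {
        framesSeen++;                      // hook point: grab frameData here for cropping
        return delegate.detect(frameData); // then forward to the real detector
    }

    int getFramesSeen() {
        return framesSeen;
    }
}
```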
FaceTrackerActivity class:
private void createCameraSource() {
    imageView = (ImageView) findViewById(R.id.face);

    FaceDetector faceDetector = new FaceDetector.Builder(this).build();
    myFaceDetector = new MyFaceDetector(faceDetector);
    myFaceDetector.setProcessor(new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
            .build());

    mCameraSource = new CameraSource.Builder(this, myFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(60.0f)
            .build();

    if (!myFaceDetector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }
}
I need to crop the face and set it on the ImageView. I am not able to implement my custom Frame here: frame.getBitmap() always returns null in detect(Frame frame). How do I achieve this?
frame.getBitmap() will only return a value if the frame was originally created from a bitmap. CameraSource supplies image information as ByteBuffers rather than bitmaps, so that is the image information that is available.
frame.getGrayscaleImageData() will return the image data.
frame.getMetadata() will return metadata such as the image dimensions and the image format.
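For the NV21 format that CameraSource delivers, the grayscale (luma) plane is simply the first width × height bytes of the buffer, followed by width × height / 2 bytes of interleaved chroma. A small plain-Java helper illustrating that layout (hypothetical class name, no Android dependencies):

```java
import java.util.Arrays;

// NV21 layout: Y plane (width*height bytes) followed by interleaved
// VU pairs (width*height/2 bytes). The Y plane alone is a valid
// grayscale image.
final class Nv21 {
    // Total buffer size NV21 requires for a given frame.
    static int bufferSize(int width, int height) {
        return width * height + (width * height) / 2;
    }

    // Copy out just the Y (grayscale) plane.
    static byte[] lumaPlane(byte[] nv21, int width, int height) {
        return Arrays.copyOfRange(nv21, 0, width * height);
    }
}
```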
This goes in CameraSource.java:
// Build a frame from the camera preview buffer (NV21 format).
Frame outputFrame = new Frame.Builder()
        .setImageData(mPendingFrameData, mPreviewSize.getWidth(),
                mPreviewSize.getHeight(), ImageFormat.NV21)
        .setId(mPendingFrameId)
        .setTimestampMillis(mPendingTimeMillis)
        .setRotation(mRotation)
        .build();

int w = outputFrame.getMetadata().getWidth();
int h = outputFrame.getMetadata().getHeight();
SparseArray<Face> detectedFaces = mDetector.detect(outputFrame);
Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);

if (detectedFaces.size() > 0) {
    // The grayscale data is the raw NV21 buffer; wrap it in a YuvImage
    // so a sub-rectangle can be compressed straight to JPEG.
    ByteBuffer byteBufferRaw = outputFrame.getGrayscaleImageData();
    byte[] byteBuffer = byteBufferRaw.array();
    YuvImage yuvimage = new YuvImage(byteBuffer, ImageFormat.NV21, w, h, null);

    // Bounding box of the first detected face: getPosition() is the
    // top-left corner of the face region.
    Face face = detectedFaces.valueAt(0);
    int left = (int) face.getPosition().x;
    int top = (int) face.getPosition().y;
    int right = (int) face.getWidth() + left;
    int bottom = (int) face.getHeight() + top;

    // Compress only the face region to JPEG, then decode it to a Bitmap.
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvimage.compressToJpeg(new Rect(left, top, right, bottom), 80, baos);
    byte[] jpegArray = baos.toByteArray();
    bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
}

((FaceTrackerActivity) mContext).setBitmapToImageView(bitmap);
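One pitfall in the crop above: the detector can report a face box that extends partly past the frame edges, and an out-of-bounds Rect passed to compressToJpeg can misbehave. Clamping the box to the frame is cheap; a small plain-Java helper (hypothetical class name) sketching that:

```java
// Clamp a crop box (left, top, right, bottom) to the frame bounds so
// the resulting rectangle is always a valid sub-region of the image.
final class CropBox {
    final int left, top, right, bottom;

    CropBox(int left, int top, int right, int bottom, int frameW, int frameH) {
        this.left = Math.max(0, Math.min(left, frameW));
        this.top = Math.max(0, Math.min(top, frameH));
        // right/bottom must never fall before left/top after clamping.
        this.right = Math.max(this.left, Math.min(right, frameW));
        this.bottom = Math.max(this.top, Math.min(bottom, frameH));
    }
}
```

The clamped values would then feed `new Rect(box.left, box.top, box.right, box.bottom)` in place of the raw face coordinates.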