Need to capture a still image during face detection with MLKit and Camera2
I'm developing a face detection feature with Camera2 and ML Kit.
In the performance tips section of the developer guide, they say that if you use the Camera2 API, you should capture images in ImageFormat.YUV_420_888 format, which is my case.
Then, in the face detector section, they recommend using an image with dimensions of at least 480x360 pixels for real-time face recognition, which is also my case.
OK, let's go. Here is my code, which works well:
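(As an aside, picking the smallest supported size that still meets that 480x360 recommendation keeps the detector fast without dropping below the accuracy floor. A minimal off-device sketch of that selection; `Size` is a plain stand-in here, since in an app the candidates would come from `StreamConfigurationMap.getOutputSizes()`:)

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Optional;

public class CaptureSizePicker {
    // Plain stand-in for android.util.Size, so the sketch runs off-device
    static final class Size {
        final int width, height;
        Size(int width, int height) { this.width = width; this.height = height; }
        long area() { return (long) width * height; }
        @Override public String toString() { return width + "x" + height; }
    }

    // Smallest candidate that still covers ML Kit's 480x360 recommendation;
    // smaller frames keep detection fast, but below the minimum accuracy drops
    static Optional<Size> pickPreviewSize(Size[] candidates, int minWidth, int minHeight) {
        return Arrays.stream(candidates)
                .filter(s -> s.width >= minWidth && s.height >= minHeight)
                .min(Comparator.comparingLong(Size::area));
    }

    public static void main(String[] args) {
        Size[] supported = {
                new Size(320, 240), new Size(640, 480),
                new Size(1280, 720), new Size(1920, 1080)
        };
        Size chosen = pickPreviewSize(supported, 480, 360).orElseThrow();
        System.out.println(chosen); // 640x480: smallest size covering 480x360
    }
}
```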
private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
    // Open the selected camera
    cameraDevice = openCamera(cameraManager, getCameraId(), cameraHandler)

    val previewSize = if (isPortrait) {
        Size(RECOMMANDED_CAPTURE_SIZE.width, RECOMMANDED_CAPTURE_SIZE.height)
    } else {
        Size(RECOMMANDED_CAPTURE_SIZE.height, RECOMMANDED_CAPTURE_SIZE.width)
    }

    // Initialize an image reader which will be used to display a preview
    imageReader = ImageReader.newInstance(
        previewSize.width, previewSize.height, ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE)

    // Retrieve preview's frame and run detector
    imageReader.setOnImageAvailableListener({ reader ->
        lifecycleScope.launch(Dispatchers.Main) {
            val image = reader.acquireNextImage()
            logD { "Image available: ${image.timestamp}" }
            faceDetector.runFaceDetection(image, getRotationCompensation())
            image.close()
        }
    }, imageReaderHandler)

    // Creates list of Surfaces where the camera will output frames
    val targets = listOf(viewfinder.holder.surface, imageReader.surface)

    // Start a capture session using our open camera and list of Surfaces where frames will go
    session = createCaptureSession(cameraDevice, targets, cameraHandler)

    val captureRequest = cameraDevice.createCaptureRequest(
        CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(viewfinder.holder.surface)
        addTarget(imageReader.surface)
    }

    // This will keep sending the capture request as frequently as possible until the
    // session is torn down or session.stopRepeating() is called
    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)
}
Now, I want to capture a still image... and this is where my problem is, because, ideally, I want:
The Camera2Basic sample demonstrates how to capture an image (the video and slow-motion samples are crashing), while the MLKit sample uses the good old, deprecated Camera API! Fortunately, I've succeeded in mixing these samples to develop my feature, but I'm failing to capture a still image with a different resolution.
I think I have to stop the preview session in order to recreate one for image capture, but I'm not sure...
What I've done is the following, but it captures the image at 480x360:
session.stopRepeating()

// Unset the image reader listener
imageReader.setOnImageAvailableListener(null, null)
// Initialize a new image reader which will be used to capture still photos
// imageReader = ImageReader.newInstance(768, 1024, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)

// Start a new image queue
val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireNextImage()
    logD { "[Still] Image available in queue: ${image.timestamp}" }
    if (imageQueue.size >= IMAGE_BUFFER_SIZE - 1) {
        imageQueue.take().close()
    }
    imageQueue.add(image)
}, imageReaderHandler)

// Creates list of Surfaces where the camera will output frames
val targets = listOf(viewfinder.holder.surface, imageReader.surface)
val captureRequest = createStillCaptureRequest(cameraDevice, targets)
session.capture(captureRequest, object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult) {
        super.onCaptureCompleted(session, request, result)
        val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
        logD { "Capture result received: $resultTimestamp" }

        // Set a timeout in case image captured is dropped from the pipeline
        val exc = TimeoutException("Image dequeuing took too long")
        val timeoutRunnable = Runnable { continuation.resumeWithException(exc) }
        imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)

        // Loop in the coroutine's context until an image with matching timestamp comes
        // We need to launch the coroutine context again because the callback is done in
        // the handler provided to the `capture` method, not in our coroutine context
        @Suppress("BlockingMethodInNonBlockingContext")
        lifecycleScope.launch(continuation.context) {
            while (true) {
                // Dequeue images while timestamps don't match
                val image = imageQueue.take()
                if (image.timestamp != resultTimestamp)
                    continue
                logD { "Matching image dequeued: ${image.timestamp}" }

                // Unset the image reader listener
                imageReaderHandler.removeCallbacks(timeoutRunnable)
                imageReader.setOnImageAvailableListener(null, null)

                // Clear the queue of images, if there are any left
                while (imageQueue.size > 0) {
                    imageQueue.take().close()
                }

                // Compute EXIF orientation metadata
                val rotation = getRotationCompensation()
                val mirrored = cameraFacing == CameraCharacteristics.LENS_FACING_FRONT
                val exifOrientation = computeExifOrientation(rotation, mirrored)
                logE { "captured image size (w/h): ${image.width} / ${image.height}" }

                // Build the result and resume progress
                continuation.resume(CombinedCaptureResult(
                    image, result, exifOrientation, imageReader.imageFormat))

                // There is no need to break out of the loop, this coroutine will suspend
            }
        }
    }
}, cameraHandler)
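(The dequeue-until-match loop above can be exercised off-device with a plain queue of timestamps. This sketch, plain Java with no Android types, mimics discarding stale frames until the one whose timestamp equals the capture result's arrives:)

```java
import java.util.concurrent.ArrayBlockingQueue;

public class TimestampMatcher {
    // Drains the queue, discarding (in the real code: closing) frames whose
    // timestamps don't match, and returns the frame the capture result reported
    static long awaitMatchingFrame(ArrayBlockingQueue<Long> queue, long resultTimestamp)
            throws InterruptedException {
        while (true) {
            long timestamp = queue.take();
            if (timestamp != resultTimestamp) continue; // stale frame, drop it
            return timestamp;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<Long> queue = new ArrayBlockingQueue<>(4);
        queue.put(100L); // older frames still in the pipeline
        queue.put(200L);
        queue.put(300L); // the frame the CaptureCallback reported
        System.out.println(awaitMatchingFrame(queue, 300L)); // 300
    }
}
```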
If I uncomment the new ImageReader instance, I get this exception:
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
Can anyone help me?
This IllegalArgumentException:
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
...obviously refers to imageReader.surface.
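(A session can only serve CaptureRequests whose targets were in the Surface list passed to createCaptureSession. One way around the exception, sketched here in Java and not tested against your code, with IMAGE_BUFFER_SIZE, viewfinderSurface and stateCallback assumed from your setup, is to create both readers up front and configure every surface at session creation, so no session re-creation is needed for the still shot:)

```java
// Both readers exist before the session, so both surfaces get configured
ImageReader previewReader = ImageReader.newInstance(
        640, 480, ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE);
ImageReader stillReader = ImageReader.newInstance(
        1024, 768, ImageFormat.JPEG, IMAGE_BUFFER_SIZE);

// Every surface a CaptureRequest may ever target must be in this list
List<Surface> targets = Arrays.asList(
        viewfinderSurface, previewReader.getSurface(), stillReader.getSurface());
cameraDevice.createCaptureSession(targets, stateCallback, cameraHandler);

// The repeating preview request streams to the viewfinder and the YUV reader...
CaptureRequest.Builder preview =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
preview.addTarget(viewfinderSurface);
preview.addTarget(previewReader.getSurface());

// ...while a one-shot still request targets only the JPEG reader
CaptureRequest.Builder still =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
still.addTarget(stillReader.getSurface());
```

(Whether the device supports this particular stream combination still depends on its hardware level; the guaranteed combinations are listed in the CameraDevice documentation.)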
Meanwhile (with CameraX) this works differently; see CameraFragment.kt...
Issue #197: Firebase face detection Api issue while using cameraX API;
a sample app matching your use case may be available soon.
ImageReader is sensitive to the choice of format and/or the combination of usage flags. The documentation states that some format combinations may not be supported. On some Android devices (perhaps some older phone models) you might find that the JPEG format does not throw this IllegalArgumentException. But that doesn't help much: you want something versatile.
What I have done in the past is to use the ImageFormat.YUV_420_888 format (which is supported by the hardware and by the ImageReader implementation). This format does not contain pre-optimizations that prevent the application from accessing the image via the internal plane arrays. I notice you have already used it successfully in your initializeCamera() method.
You can then extract the image data from the frame you want:
Image.Plane[] planes = img.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
byte[] data = new byte[buffer.remaining()];
buffer.get(data); // plane buffers are direct, so array() is not available
and then create a still image via a Bitmap, using JPEG compression, PNG, or whichever encoding you choose:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
ByteArrayOutputStream out2 = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 75, out2);
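(Note that plane 0 alone is only the luma (Y) plane; for a buffer that YuvImage will accept as NV21, the chroma planes must be appended as interleaved V,U pairs after the Y data. A minimal sketch of that packing, on plain byte arrays so it runs off-device; real Image planes also carry row and pixel strides that must be honoured:)

```java
public class Nv21Packer {
    // Packs separate Y, U and V planes (already stride-free, chroma subsampled
    // 2x2) into a single NV21 buffer: full Y plane, then interleaved V,U pairs
    static byte[] toNv21(byte[] y, byte[] u, byte[] v, int width, int height) {
        byte[] nv21 = new byte[width * height + 2 * (width / 2) * (height / 2)];
        System.arraycopy(y, 0, nv21, 0, width * height);
        int out = width * height;
        for (int i = 0; i < v.length; i++) {
            nv21[out++] = v[i]; // NV21 is V-first (NV12 would be U-first)
            nv21[out++] = u[i];
        }
        return nv21;
    }

    public static void main(String[] args) {
        // 4x2 frame: 8 luma samples, 2 chroma samples per plane
        byte[] y = {10, 11, 12, 13, 14, 15, 16, 17};
        byte[] u = {20, 21};
        byte[] v = {30, 31};
        byte[] nv21 = toNv21(y, u, v, 4, 2);
        // Y copied verbatim, then V/U interleaved
        System.out.println(nv21[8] + "," + nv21[9] + "," + nv21[10] + "," + nv21[11]); // 30,20,31,21
    }
}
```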