I'm testing the android vision text recognizer, and in real-time use, the docs suggest I "Throttle calls to the text recognizer. If a new video frame becomes available while the text recognizer is running, drop the frame."
In the sample ocr-reader app, which shares the CameraSource and OcrDetectorProcessor classes with the ML Kit sample app, I'm trying to figure out precisely how this is accomplished. Can someone point me in the right direction? I'm looking at the CameraPreviewCallback and FrameProcessingRunnable classes, but no progress yet. Thanks!
I had the exact same question, and I managed to do it like this. It is quite "manual": the idea is to set a flag before handing the image to your detector, and to drop any frame that arrives while the flag is still set.
import java.util.concurrent.atomic.AtomicBoolean

private val isProcessing = AtomicBoolean(false)

private fun process(image: FirebaseVisionImage) {
    isProcessing.set(true)
    detector.processImage(image)
        .addOnSuccessListener { texts ->
            processTextRecognitionResult(texts)
            isProcessing.set(false)
        }
        .addOnFailureListener {
            println("Detection failed with $it")
            // Reset the flag here too, otherwise a single failed
            // detection would block all further frames
            isProcessing.set(false)
        }
}
So basically, in the analyzer callback, drop the frame whenever the flag is still set. Note that the analyzer receives an ImageProxy, so the underlying media.Image has to be wrapped in a FirebaseVisionImage before calling process:

override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
    imageProxy?.image?.let { mediaImage ->
        if (!isProcessing.get()) {
            // degreesToFirebaseRotation is a small helper you write yourself,
            // mapping degrees to the FirebaseVisionImageMetadata.ROTATION_* constants
            val image = FirebaseVisionImage.fromMediaImage(mediaImage, degreesToFirebaseRotation(degrees))
            process(image)
        }
    }
}
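One caveat with this pattern: there is a small window between the `get()` in the analyzer and the `set(true)` inside `process`, during which a second frame could slip through. If that matters in your app, `compareAndSet` claims the flag atomically in a single step. A minimal, app-agnostic sketch of that idea, using only `java.util.concurrent` (no ML Kit types involved):

```kotlin
import java.util.concurrent.atomic.AtomicBoolean

fun main() {
    val isProcessing = AtomicBoolean(false)

    // First frame arrives: compareAndSet(false, true) atomically
    // checks the flag and claims it in one step
    check(isProcessing.compareAndSet(false, true)) // frame accepted

    // A frame arriving while the detector is busy fails to claim
    // the flag and is simply dropped
    check(!isProcessing.compareAndSet(false, true)) // frame dropped

    // The success/failure listener releases the flag when done
    isProcessing.set(false)

    // The next frame can claim the flag again
    check(isProcessing.compareAndSet(false, true)) // frame accepted
    println("throttling behaved as expected")
}
```

In the analyzer you would then replace `if (!isProcessing.get())` with `if (isProcessing.compareAndSet(false, true))` and remove the `set(true)` from `process`, so claiming and checking the flag is one atomic operation.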