Why are these log statements not printing?
I'm building an object detection application (in Kotlin, for Android). The application uses CameraX to build a camera preview and Google ML Kit to provide the machine learning functionality. Just for reference: I used this CameraX documentation and this Google ML Kit documentation.
I'm currently attempting to print Log.d("TAG", "onSuccess" + it.size) to my IDE console in order to determine whether .addOnSuccessListener is actually running. If it is, it should print something along the lines of onSuccess1. However, this isn't the case. In fact, it isn't even printing the Log statement from the .addOnFailureListener either, which really confuses me, as I'm not entirely sure the objectDetector code is running at all. What really puzzles me is that I have more or less completed the same project in Java and did not face this issue.
I did have someone point out that within my YourImageAnalyzer.kt class, if mediaImage is null, then I won't see anything logged. However, in my own debugging (this is actually my very first time debugging), I was unable to determine whether that is the case. I suppose this issue may provide a lead into how I'll resolve the problem, and also help me learn how to debug properly.
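For anyone following along, a minimal way to check the null theory would be an else branch that logs and closes the proxy. This branch is only a sketch of an addition, not code that is in my project:

```kotlin
// Sketch: a debugging else branch inside analyze().
// If "mediaImage was null" shows up in Logcat, the null theory holds.
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    if (mediaImage != null) {
        // ... existing detection code ...
    } else {
        Log.d("TAG", "mediaImage was null")
        imageProxy.close() // close the proxy so CameraX keeps delivering frames
    }
}
```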
Here is my YourImageAnalyzer.kt class, and I will also add the code for my MainActivity.kt class below as well.
private class YourImageAnalyzer : ImageAnalysis.Analyzer {
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image =
                InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            val localModel = LocalModel.Builder()
                .setAssetFilePath("mobilenet_v1_0.75_192_quantized_1_metadata_1.tflite")
                .build()
            val customObjectDetectorOptions =
                CustomObjectDetectorOptions.Builder(localModel)
                    .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
                    .enableClassification()
                    .setClassificationConfidenceThreshold(0.5f)
                    .setMaxPerObjectLabelCount(3)
                    .build()
            val objectDetector =
                ObjectDetection.getClient(customObjectDetectorOptions)
            objectDetector // Here is where the issue stems, with the following listeners
                .process(image)
                .addOnSuccessListener {
                    Log.i("TAG", "onSuccess" + it.size)
                    for (detectedObject in it) {
                        val boundingBox = detectedObject.boundingBox
                        val trackingId = detectedObject.trackingId
                        for (label in detectedObject.labels) {
                            val text = label.text
                            val index = label.index
                            val confidence = label.confidence
                        }
                    }
                }
                .addOnFailureListener { e -> Log.e("TAG", e.localizedMessage) }
                .addOnCompleteListener { imageProxy.close() }
        }
    }
}
class MainActivity : AppCompatActivity() {
    private lateinit var cameraProviderFuture: ListenableFuture<ProcessCameraProvider>

    override fun onCreate(savedInstanceState: Bundle?) {
        cameraProviderFuture = ProcessCameraProvider.getInstance(this)
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        cameraProviderFuture.addListener(Runnable {
            val cameraProvider = cameraProviderFuture.get()
            bindPreview(cameraProvider)
        }, ContextCompat.getMainExecutor(this))
    }

    fun bindPreview(cameraProvider: ProcessCameraProvider) {
        val previewView = findViewById<PreviewView>(R.id.previewView)
        val preview: Preview = Preview.Builder()
            .build()
        val cameraSelector: CameraSelector = CameraSelector.Builder()
            .requireLensFacing(CameraSelector.LENS_FACING_BACK)
            .build()
        preview.setSurfaceProvider(previewView.surfaceProvider)
        val camera = cameraProvider.bindToLifecycle(this as LifecycleOwner, cameraSelector, preview)
    }
}
You are not binding your ImageAnalysis use case. Something along the lines of:
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(1280, 720))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
    .build()
and then:
imageAnalysis.setAnalyzer(executor, YourImageAnalyzer())
cameraProvider.bindToLifecycle(this as LifecycleOwner, cameraSelector, imageAnalysis, preview)
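Note that executor above is not defined in the snippet; CameraX accepts any Executor you supply. A minimal sketch of one common option, assuming a dedicated background thread is acceptable for your analysis:

```kotlin
import java.util.concurrent.Executors

// A single-threaded executor keeps frame analysis off the main thread
// and processes frames in order. Shut it down in onDestroy().
private val analysisExecutor = Executors.newSingleThreadExecutor()

// then:
imageAnalysis.setAnalyzer(analysisExecutor, YourImageAnalyzer())
```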
Also, a suggestion as a bonus: you should move your LocalModel.Builder() out of analyze, as analyze is called each time an image arrives. You do not need to execute this piece of code every time, and doing so makes your analysis slower. So move this code:
val localModel = LocalModel.Builder()
    .setAssetFilePath("mobilenet_v1_0.75_192_quantized_1_metadata_1.tflite")
    .build()
to just below the class declaration, private class YourImageAnalyzer : ImageAnalysis.Analyzer {.
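Putting that together, the hoisted version could look like the sketch below. The detector itself can be hoisted too, since its options do not change per frame (the early-return null handling here is my own variation, not code from the question):

```kotlin
private class YourImageAnalyzer : ImageAnalysis.Analyzer {
    // Built once when the analyzer is created, not on every frame.
    private val localModel = LocalModel.Builder()
        .setAssetFilePath("mobilenet_v1_0.75_192_quantized_1_metadata_1.tflite")
        .build()
    private val objectDetector = ObjectDetection.getClient(
        CustomObjectDetectorOptions.Builder(localModel)
            .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
            .enableClassification()
            .setClassificationConfidenceThreshold(0.5f)
            .setMaxPerObjectLabelCount(3)
            .build()
    )

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close() // always close, or CameraX stops delivering frames
            return
        }
        val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        objectDetector
            .process(image)
            .addOnSuccessListener { Log.i("TAG", "onSuccess" + it.size) }
            .addOnFailureListener { e -> Log.e("TAG", e.localizedMessage ?: "detection failed") }
            .addOnCompleteListener { imageProxy.close() }
    }
}
```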