How to convert RGBA_8888 to a ByteBuffer to feed it to a TF Lite model
I am using the CameraX ImageAnalysis use case to run a TF Lite model, and the output image format I get is RGBA_8888. How do I convert it to a ByteBuffer to feed it to my ML model?
This is the code generated by Android Studio for the ML model:
// Creates inputs for reference.
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
inputFeature0.loadBuffer(byteBuffer)
// Runs model inference and gets result.
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
// Releases model resources if no longer used.
model.close()
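The generated stub expects the ByteBuffer to hold 1 × 224 × 224 × 3 float32 values. For reference, this is the per-pixel arithmetic for turning one packed ARGB_8888 int into normalized R, G, B floats (a minimal sketch; `unpackRgb` is a hypothetical helper, not part of the generated code):

```kotlin
// Unpack one packed ARGB_8888 pixel into R, G, B floats scaled to [0, 1].
fun unpackRgb(pixel: Int): FloatArray {
    val r = (pixel shr 16 and 0xFF) / 255f
    val g = (pixel shr 8 and 0xFF) / 255f
    val b = (pixel and 0xFF) / 255f
    return floatArrayOf(r, g, b)
}

fun main() {
    // 0xFF3366CC: A = 0xFF, R = 0x33 (51), G = 0x66 (102), B = 0xCC (204)
    val rgb = unpackRgb(0xFF3366CC.toInt())
    println(rgb.joinToString())  // 0.2, 0.4, 0.8
}
```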
This is the code I have written to convert RGBA_8888 to a ByteBuffer, but it gives the same output data (confidences) every time:
```kotlin
imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this)) { imageProxy ->
    val bitmap = Bitmap.createBitmap(
        imageProxy.width,
        imageProxy.height,
        Bitmap.Config.ARGB_8888
    )
    val img = Bitmap.createScaledBitmap(bitmap, 224, 224, false)

    val model = ModelFull.newInstance(context)

    val byteBuffer: ByteBuffer = ByteBuffer.allocate(4 * 224 * 224 * 3)
    byteBuffer.order(ByteOrder.nativeOrder())

    // Get a 1D array of the 224 * 224 pixels in the image.
    val intValues = IntArray(224 * 224)
    img.getPixels(intValues, 0, img.width, 0, 0, img.width, img.height)

    // Iterate over the pixels, extract the R, G, and B values, and add them to the ByteBuffer.
    var pixel = 0
    for (i in 0 until 224) {
        for (j in 0 until 224) {
            val `val` = intValues[pixel++] // ARGB
            byteBuffer.putFloat((`val` shr 16 and 0xFF) * (1f / 255f))
            byteBuffer.putFloat((`val` shr 8 and 0xFF) * (1f / 255f))
            byteBuffer.putFloat((`val` and 0xFF) * (1f / 255f))
        }
    }

    val inputFeature0 =
        TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    inputFeature0.loadBuffer(byteBuffer)

    // Runs model inference and gets the result.
    val outputs = model.process(inputFeature0)
    val outputFeature0 = outputs.outputFeature0AsTensorBuffer
    val confidences = outputFeature0.floatArray
    Log.d("this is my array", "arr: " + Arrays.toString(confidences))

    // Releases model resources if no longer used.
    model.close()
    imageProxy.close()
}
```
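A likely reason the confidences never change: `Bitmap.createBitmap(...)` returns a blank (all-zero) bitmap, and the pixel data from the `ImageProxy` is never copied into it, so the model sees the same empty input on every frame. With the analyzer configured for RGBA_8888 output, one common fix is to copy the first plane's buffer into the bitmap before scaling. This is a sketch, assuming `OUTPUT_IMAGE_FORMAT_RGBA_8888` and that the plane's `rowStride` equals `width * 4` (no row padding):

```kotlin
// Sketch: populate the bitmap from the ImageProxy before scaling it.
// Assumes RGBA_8888 output with no row padding (rowStride == width * 4).
val bitmap = Bitmap.createBitmap(imageProxy.width, imageProxy.height, Bitmap.Config.ARGB_8888)
val planeBuffer = imageProxy.planes[0].buffer
planeBuffer.rewind()
bitmap.copyPixelsFromBuffer(planeBuffer)
val img = Bitmap.createScaledBitmap(bitmap, 224, 224, false)
```

If `rowStride` is larger than `width * 4`, the rows need to be copied individually instead.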
Try this, it works for me:
TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 400, 600, 3}, DataType.FLOAT32);
Bitmap input = Bitmap.createScaledBitmap(bitmap, 400, 600, true);
TensorImage image = new TensorImage(DataType.FLOAT32);
image.load(input);
ByteBuffer byteBuffer = image.getBuffer();
inputFeature0.loadBuffer(byteBuffer);
Seeinthedark.Outputs outputs = model.process(inputFeature0);
TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
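One caveat with this approach: loading a bitmap into a `TensorImage` of `DataType.FLOAT32` stores the channel values as floats in the 0..255 range without rescaling. If the model was trained on inputs in [0, 1], the values still need to be normalized, for example with the support library's `NormalizeOp(0f, 255f)`, which applies (value − mean) / stddev per channel. The arithmetic itself, as a plain sketch:

```kotlin
// (value - mean) / stddev: the formula NormalizeOp applies to each channel value.
fun normalize(value: Float, mean: Float, stddev: Float): Float = (value - mean) / stddev

fun main() {
    println(normalize(0f, 0f, 255f))     // 0.0
    println(normalize(127.5f, 0f, 255f)) // 0.5
    println(normalize(255f, 0f, 255f))   // 1.0
}
```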