How to output array of shape [1, 28, 28, 1] from a tflite model as image in android
I have a saved tflite model whose input and output details are as follows:
Input: [{'name': 'dense_4_input', 'index': 0, 'shape': array([  1, 100], dtype=int32), 'shape_signature': array([  1, 100], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
Output: [{'name': 'Identity', 'index': 22, 'shape': array([ 1, 28, 28,  1], dtype=int32), 'shape_signature': array([ 1, 28, 28,  1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
How can I display the output as an image in an Android app, using Java and TFLite?
import android.content.res.AssetManager
import android.graphics.Bitmap
import android.util.Log
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.Tensor
import java.io.FileInputStream
import java.lang.StringBuilder
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.channels.FileChannel
class ImgPredictor(val assetManager: AssetManager, modelFilename: String) {
    private var tflite: Interpreter
    private var input: ByteBuffer
    private var output: ByteBuffer

    init {
        val tfliteOptions = Interpreter.Options()
        // Memory-map the model file from assets.
        val fd = assetManager.openFd(modelFilename)
        val inputStream = FileInputStream(fd.fileDescriptor)
        val fileChannel: FileChannel = inputStream.channel
        val startOffset: Long = fd.startOffset
        val declaredLength: Long = fd.declaredLength
        val mbb = fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
        tflite = Interpreter(mbb, tfliteOptions)
        Log.i("ImgPredictor", "interpreter: ${tflite.detail()}")
        // The interpreter needs direct byte buffers; both tensors are float32 (4 bytes per value).
        input = ByteBuffer.allocateDirect(100 * Float.SIZE_BYTES)
        input.order(ByteOrder.nativeOrder())
        output = ByteBuffer.allocateDirect(1 * 28 * 28 * 1 * Float.SIZE_BYTES)
        output.order(ByteOrder.nativeOrder())
    }
    fun predict(data: FloatArray): Bitmap {
        // The model input is float32 [1, 100]; write the values in native byte order.
        input.rewind()
        for (i in 0 until 100) {
            input.putFloat(data[i])
        }
        output.rewind()
        tflite.run(input, output)
        // The raw float output cannot be copied into an ARGB_8888 bitmap directly;
        // map each value (assumed to be in [0, 1]) to an opaque grayscale pixel first.
        output.rewind()
        val pixels = IntArray(28 * 28)
        for (i in pixels.indices) {
            val g = (output.getFloat() * 255f).toInt().coerceIn(0, 255)
            pixels[i] = (0xFF shl 24) or (g shl 16) or (g shl 8) or g
        }
        val bitmap = Bitmap.createBitmap(28, 28, Bitmap.Config.ARGB_8888)
        bitmap.setPixels(pixels, 0, 28, 0, 0, 28, 28)
        return bitmap
    }
}
fun Tensor.detail(): String {
    return "[shape: ${this.shape().toList()} dataType: ${this.dataType()}, bytes: ${this.numBytes()}]"
}

fun Interpreter.detail(): String {
    val sb = StringBuilder("interpreter: \n")
    sb.append("input: { \n")
    for (i in 0 until this.inputTensorCount) {
        sb.append("  ").append(this.getInputTensor(i).detail()).append("\n")
    }
    sb.append("}, \n")
    sb.append("output: { \n")
    for (i in 0 until this.outputTensorCount) {
        sb.append("  ").append(this.getOutputTensor(i).detail()).append("\n")
    }
    sb.append("}")
    return sb.toString()
}
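The float-to-grayscale-pixel mapping can be sanity-checked off the device with plain Java, since it has no Android dependencies. This is a minimal sketch with hypothetical helper names (`toArgb`, `toPixels`), assuming the model output is normalized to [0, 1]:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OutputToPixels {
    // Pack one normalized float value in [0, 1] into an opaque grayscale ARGB int.
    static int toArgb(float v) {
        int g = Math.max(0, Math.min(255, Math.round(v * 255f)));
        return 0xFF000000 | (g << 16) | (g << 8) | g;
    }

    // Convert a raw [1, 28, 28, 1] float buffer into a 28*28 int[] for Bitmap.setPixels.
    static int[] toPixels(ByteBuffer output) {
        output.rewind();
        int[] pixels = new int[28 * 28];
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = toArgb(output.getFloat());
        }
        return pixels;
    }

    public static void main(String[] args) {
        // Fake model output: all values 0.5 -> mid-gray pixels.
        ByteBuffer buf = ByteBuffer.allocate(28 * 28 * 4).order(ByteOrder.nativeOrder());
        for (int i = 0; i < 28 * 28; i++) buf.putFloat(0.5f);
        int[] px = toPixels(buf);
        System.out.println(Integer.toHexString(px[0])); // prints "ff808080"
    }
}
```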
You can check the official tutorial for more details: the Object Detection interpreter example.
But here are a few points you should pay attention to:
1. Try to keep implementation 'org.tensorflow:tensorflow-lite:xxx' at the same version as the TensorFlow you use on your PC, because some ops may not be supported in older runtime versions.
2. Use some detail functions to print the interpreter's inputs/outputs.
3. Check the endianness (byte order) of the input/output data buffers.
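Point 3 can be verified quickly without a device. A small sketch in plain Java showing why the buffer's byte order matters when writing floats:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianCheck {
    public static void main(String[] args) {
        float v = 1.0f; // IEEE 754 bits: 0x3F800000
        ByteBuffer be = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN);
        ByteBuffer le = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        be.putFloat(v);
        le.putFloat(v);
        // Same float, opposite byte layouts in memory:
        System.out.printf("BE first byte: 0x%02X%n", be.get(0)); // 0x3F
        System.out.printf("LE first byte: 0x%02X%n", le.get(0)); // 0x00
        // TFLite buffers should use the platform's native order:
        System.out.println("native order: " + ByteOrder.nativeOrder());
    }
}
```

If the interpreter's output looks like noise, a mismatched byte order between the buffer and the native interpreter is a common cause.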