
How to get TFLite model output in C++?

I have a TFLite model for mask detection whose sigmoid layer outputs a value between 0 [mask] and 1 [no_mask].

I inspected the input and output nodes with Netron, and this is what I got:

[Image: Netron view of the model's input and output nodes]

I tested the model for inference in Python and it works fine.

# A simple inference pipeline

import numpy as np
import tensorflow as tf
import cv2


# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="efficient_net.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Rescale to [1,32,32,1].
input_shape = input_details[0]['shape']
img = cv2.imread("nomask.jpg")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
input_data = img_gray[..., tf.newaxis]
input_data = tf.image.resize(input_data, [32, 32])
input_data = input_data[tf.newaxis, ...]
input_data = np.array(input_data, dtype=np.float32)


# Set the input tensor.
interpreter.set_tensor(input_details[0]['index'], input_data)


interpreter.invoke()

output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data[0][0])  

I tried to do the same thing in C++, but I get 0 or no output:

#include <iostream>
#include <cstdio>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"
#include "opencv2/opencv.hpp"

using namespace cv;


#define TFLITE_MINIMAL_CHECK(x)                                  \
    if (!(x))                                                    \
    {                                                            \
        fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
        exit(1);                                                 \
    }

int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        fprintf(stderr, "minimal <tflite model>\n");
        return 1;
    }
    const char* filename = argv[1];



    // read image file
    cv::Mat img = cv::imread("D:\\nomask.png");

    // convert to float; BGR -> Grayscale
    cv::Mat inputImg;
    img.convertTo(inputImg, CV_32FC1);
    cv::cvtColor(inputImg, inputImg, cv::COLOR_BGR2GRAY);
    // resize image as model input
    cv::resize(inputImg, inputImg, cv::Size(32, 32));
    
    
    // Load model
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile(filename);
    TFLITE_MINIMAL_CHECK(model != nullptr);

    // Build the interpreter with the InterpreterBuilder.
    // Note: all Interpreters should be built with the InterpreterBuilder,
    // which allocates memory for the Interpreter and does various set up
    // tasks so that the Interpreter can read the provided model.
    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder builder(*model, resolver);
    std::unique_ptr<tflite::Interpreter> interpreter;
    builder(&interpreter);
    TFLITE_MINIMAL_CHECK(interpreter != nullptr);

    // Allocate tensor buffers.
    TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
    

    // Fill input buffers
    // TODO(user): Insert code to fill input tensors.
    // Note: The buffer of the input tensor with index `i` of type T can
    // be accessed with `T* input = interpreter->typed_input_tensor<T>(i);`.
    float* input = interpreter->typed_input_tensor<float>(0);
    input = inputImg.ptr<float>(0);
    
    // Run inference
    TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
    printf("\n\n=== Post-invoke Interpreter State ===\n");
    

     
     
    
    float* output = interpreter->typed_output_tensor<float>(149);
    std::cout << output[0];
  

    return 0;
}

I tried changing the output index to 0 instead of 149, but I always get a small output value indicating a mask no matter what the input is (this does not happen in Python). What am I doing wrong?

The code works now with these changes:

memcpy(input, inputImg.data, 32*32*sizeof(float));

instead of

 input = inputImg.ptr<float>(0);

and using index 0 for the output:

float* output = interpreter->typed_output_tensor<float>(0);

The index here refers to the tensor's position in the interpreter's output list, not its tensor id in the graph.
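For reference, here is a minimal sketch that puts both fixes together, assuming the same 32x32 grayscale float preprocessing as in the question. The program name and command-line handling are illustrative only, not part of the original code.

#include <cstdio>
#include <cstring>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "opencv2/opencv.hpp"

int main(int argc, char* argv[])
{
    if (argc != 3)
    {
        fprintf(stderr, "mask_infer <tflite model> <image>\n");
        return 1;
    }

    // Preprocess like the Python pipeline: grayscale, 32x32, float32.
    cv::Mat img = cv::imread(argv[2]);
    cv::Mat inputImg;
    cv::cvtColor(img, inputImg, cv::COLOR_BGR2GRAY);
    cv::resize(inputImg, inputImg, cv::Size(32, 32));
    inputImg.convertTo(inputImg, CV_32FC1);

    // Load the model and build the interpreter.
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile(argv[1]);
    if (model == nullptr) return 1;
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (interpreter == nullptr || interpreter->AllocateTensors() != kTfLiteOk) return 1;

    // Copy the pixels into the input tensor's buffer. Reassigning the pointer
    // (input = inputImg.ptr<float>(0)) only changes the local variable and
    // leaves the tensor's own buffer untouched, which is why the original
    // code produced the same output regardless of the image.
    float* input = interpreter->typed_input_tensor<float>(0);
    std::memcpy(input, inputImg.ptr<float>(0), 32 * 32 * sizeof(float));

    if (interpreter->Invoke() != kTfLiteOk) return 1;

    // typed_output_tensor() takes the position in the interpreter's output
    // list (this model has a single output, so 0). The id 149 shown by Netron
    // is the graph-level tensor index, available as interpreter->outputs()[0].
    float* output = interpreter->typed_output_tensor<float>(0);
    printf("sigmoid score: %f (0 = mask, 1 = no_mask)\n", output[0]);

    return 0;
}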
