
TFLite Segmentation Fault by getting inputs and outputs with C++

I'm trying to run a TfLite model on an x86_64 system. Everything seems to work fine, but when I try to get the input or output tensor with typed_input_tensor(0), I get a null pointer.

My model is a simple HelloWorldNN:

import tensorflow as tf
import numpy as np
from tensorflow import keras

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

xs = np.array([-1.0,  0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model.fit(xs, ys, epochs=10)

print(model.predict([10.0]))

model.summary()

# Convert the trained Keras model to a TFLite flatbuffer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("linear.tflite", "wb") as f:
    f.write(tflite_model)

For the C++ part I cloned the tensorflow git repository and checked out commit d855adfc5a0195788bf5f92c3c7352e638aa1109. This is the commit that is necessary for the Coral hardware I plan to use. I built tensorflow-lite.a and linked it to my application:


    #include <iostream>
    #include <memory>

    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"
    #include "tensorflow/lite/optional_debug_tools.h"

    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile("linear.tflite");

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) {
        std::cerr << "Failed to build interpreter." << std::endl;
    }

    if (interpreter->AllocateTensors() != kTfLiteOk) {
        std::cerr << "Failed to allocate tensors." << std::endl;
    }
    std::cout << "Number of tensors: " << interpreter->tensors_size() << std::endl;
    tflite::PrintInterpreterState(interpreter.get());

    float* input = interpreter->typed_input_tensor<float>(0);

    interpreter->Invoke();

    float* output = interpreter->typed_output_tensor<float>(0);

If I run the code, both the input and output pointers are null. The output of PrintInterpreterState(interpreter.get()) is the following:

Number of tensors8 Num of Inputs 18446732345621392436
Interpreter has 8 tensors and 3 nodes
Inputs: 4
Outputs: 5

Tensor   0 dense/BiasAdd_int8   kTfLiteInt8  kTfLiteArenaRw          1 bytes ( 0.0 MB)  1 1
Tensor   1 dense/MatMul_bias    kTfLiteInt32   kTfLiteMmapRo          4 bytes ( 0.0 MB)  1
Tensor   2 dense/kernel/transpose kTfLiteInt8   kTfLiteMmapRo          1 bytes ( 0.0 MB)  1 1
Tensor   3 dense_input_int8     kTfLiteInt8  kTfLiteArenaRw          1 bytes ( 0.0 MB)  1 1
Tensor   4 dense_input          kTfLiteFloat32  kTfLiteArenaRw          4 bytes ( 0.0 MB)  1 1
Tensor   5 dense/BiasAdd        kTfLiteFloat32  kTfLiteArenaRw          4 bytes ( 0.0 MB)  1 1
Tensor   6 (null)               kTfLiteNoType  kTfLiteMemNone          0 bytes ( 0.0 MB)  (null)
Tensor   7 (null)               kTfLiteNoType  kTfLiteMemNone          0 bytes ( 0.0 MB)  (null)

Node   0 Operator Builtin Code 114 QUANTIZE
  Inputs: 4
  Outputs: 3
Node   1 Operator Builtin Code   9 FULLY_CONNECTED
  Inputs: 3 2 1
  Outputs: 0
Node   2 Operator Builtin Code   6 DEQUANTIZE
  Inputs: 0
  Outputs: 5
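The QUANTIZE → FULLY_CONNECTED → DEQUANTIZE chain in the state dump shows that the converter produced an int8-quantized kernel: the float input (tensor 4) is quantized into tensor 3, the matmul runs in int8, and tensor 0 is dequantized back into the float output (tensor 5). The quantize/dequantize steps follow TFLite's affine scheme; a minimal sketch, with made-up scale and zero-point values rather than the ones stored in linear.tflite:

```python
# TFLite affine (asymmetric) int8 quantization:
#   q = round(x / scale) + zero_point        (clamped to [-128, 127])
#   x ≈ (q - zero_point) * scale
# scale and zero_point below are illustrative, not read from this model.

def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zero_point = 0.1, -5
q = quantize(1.0, scale, zero_point)    # round(10.0) - 5 = 5
x = dequantize(q, scale, zero_point)    # (5 - (-5)) * 0.1 = 1.0
print(q, x)
```

The round trip loses at most about scale/2 per value, which is why the float entry and exit tensors (4 and 5) still behave like an ordinary float model from the caller's point of view.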

I have no idea where my mistake is. It worked with tensorflow 1.15, but I can't use 1.15 anymore with the Coral hardware. I would be grateful for any help.

OK, I found my problem. I hadn't updated the include files; the headers were still from 1.15. :-)
