
Error when converting a tf model to TFlite model

I am currently building a model to run on my Nano 33 BLE Sense board and predict the weather by measuring humidity, pressure, and temperature; I have 5 classes. I used a Kaggle dataset to train it.

    # Imports assumed by this snippet
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.utils import to_categorical
    from tensorflow.keras.layers.experimental import preprocessing
    from sklearn.model_selection import train_test_split

    # One-hot encode the 'Summary' column as labels; the remaining columns are features
    df_labels = to_categorical(df.pop('Summary'))
    df_features = np.array(df)

    X_train, X_test, y_train, y_test = train_test_split(df_features, df_labels, test_size=0.15)

    # Normalization layer adapted to the training features
    normalize = preprocessing.Normalization()
    normalize.adapt(X_train)

    activ_func = 'gelu'
    model = tf.keras.Sequential([
        normalize,
        tf.keras.layers.Dense(units=6, input_shape=(3,)),
        tf.keras.layers.Dense(units=100, activation=activ_func),
        tf.keras.layers.Dense(units=100, activation=activ_func),
        tf.keras.layers.Dense(units=100, activation=activ_func),
        tf.keras.layers.Dense(units=100, activation=activ_func),
        tf.keras.layers.Dense(units=5, activation='softmax')
    ])

    model.compile(optimizer='adam',  # tf.keras.optimizers.Adagrad(lr=0.001)
                  loss='categorical_crossentropy', metrics=['acc'])
    model.summary()
    model.fit(x=X_train, y=y_train, verbose=1, epochs=15, batch_size=32, use_multiprocessing=True)

Once the model is trained, I want to convert it into a tflite model. When I run the convert command I get the following message:

    # Convert the model to the TensorFlow Lite format without quantization
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    
    # Save the model to disk
    open("gesture_model.tflite", "wb").write(tflite_model)
      
    import os
    basic_model_size = os.path.getsize("gesture_model.tflite")
    print("Model is %d bytes" % basic_model_size)




    <unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.Erf {device = ""}

For your information, I use Google Colab to design the model.

If anyone has an idea or solution to this issue, I would be glad to hear it!

This often happens when you have not set the converter's supported operations.

Here is an example:

    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)

    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite builtin ops.
        tf.lite.OpsSet.SELECT_TF_OPS     # enable select TensorFlow ops.
    ]

    tflite_model = converter.convert()
    open("converted_model.tflite", "wb").write(tflite_model)

The list of supported operations changes constantly, so if the error still appears you can also try enabling the experimental converter as follows:

    converter.experimental_new_converter = True

I solved the problem. It was the activation function 'gelu', which is not yet supported by TFLite. I changed it to 'relu' and the problem went away.
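As context for why this fix works (my own note, not from the thread): the exact GELU activation is defined as gelu(x) = 0.5·x·(1 + erf(x/√2)), so choosing 'gelu' pulls the Gaussian error function, the `tf.Erf` op named in the converter error, into the graph, and that op is not among the TFLite builtins. A minimal pure-Python sketch of the exact formulation and the common tanh approximation (which avoids erf entirely):

```python
import math

def gelu_exact(x: float) -> float:
    # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2))) -- this is where tf.Erf comes from.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    # Widely used tanh approximation of GELU; no erf op required.
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}: exact={gelu_exact(x):+.6f} approx={gelu_tanh(x):+.6f}")
```

The two formulations agree closely over typical activation ranges, which is why swapping to 'relu' (or an approximation built from supported ops) changes accuracy very little while keeping the graph convertible.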
