
TFLite Converter: RandomStandardNormal implemented for keras model, but not for pure TensorFlow model

Task

I have two models that should be equivalent. The first is built with Keras, the second with plain TensorFlow. Both are variational autoencoders and both use tf.random.normal when sampling from the latent space; they produce similar results. Everything is tested with the nightly build (1.15).

The confusion starts when I try to convert them into TensorFlow Lite models with post-training quantization. I use the same commands for both models:

converter = tf.compat.v1.lite.TFLiteConverter... # from respective save file
converter.representative_dataset = representative_dataset_gen
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
open('vae.tflite', 'wb').write(tflite_model)
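
For reference, a minimal sketch of the full Keras-side conversion in my setup; the body of representative_dataset_gen and the (1, 90) batch shape are illustrative stand-ins rather than my exact code:

import numpy as np
import tensorflow as tf

# Illustrative calibration data: float32 batches shaped like the (90,) model input.
# In practice this would iterate over real training samples.
def representative_dataset_gen():
    for _ in range(100):
        yield [np.random.rand(1, 90).astype(np.float32)]

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file('model.h5')
converter.representative_dataset = representative_dataset_gen
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
open('vae.tflite', 'wb').write(tflite_model)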

Error

For the Keras model everything goes well, and I end up with a working TFLite model. However, when I try the same with the TensorFlow version, I run into an error stating that RandomStandardNormal is not implemented:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, EXP, FULLY_CONNECTED, LEAKY_RELU, LOG, MUL. Here is a list of operators for which you will need custom implementations: RandomStandardNormal.
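
Following the hint in the error message, one route (a sketch, assuming the same converter object as above) is to let the converter fall back to select TensorFlow ops; the resulting model then needs the Flex delegate at runtime:

# Allow ops without TFLite builtin kernels (such as RandomStandardNormal) to be
# exported as select TensorFlow ops instead of failing the conversion.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # use builtin TFLite kernels where possible
    tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to TensorFlow kernels otherwise
]
tflite_model = converter.convert()

The other route mentioned in the message, allow_custom_ops=True, only makes the conversion succeed; the model will still fail at runtime unless a custom kernel for the op is registered with the interpreter.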

Question

This does not make sense to me, because the same sampling op clearly converts fine in the Keras version. Does Keras know something that I have to tell my TensorFlow model explicitly?

Models

TensorFlow

fc_layer = tf.compat.v1.layers.dense

# inputs have shape (90,)
x = tf.identity(inputs, name='x')

# encoder
outputs = fc_layer(x, 40, activation=leaky_relu)

self.z_mu = fc_layer(outputs, 10, activation=None)
self.z_sigma = fc_layer(outputs, 10, activation=softplus)

# latent space
eps = tf.random.normal(shape=tf.shape((10,)), mean=0, stddev=1, dtype=tf.float32)
outputs = self.z_mu + eps * self.z_sigma

# decoder
outputs = fc_layer(outputs, 40, activation=leaky_relu)

# prediction
x_decoded = fc_layer(outputs, 90, activation=None)
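
For the TensorFlow variant I build the converter from the saved graph; a rough sketch assuming a frozen GraphDef, where the file name and the output tensor name 'x_decoded' are placeholders rather than my real ones:

# Conversion sketch for the pure-TF graph; 'vae_frozen.pb', 'x' and 'x_decoded'
# are assumed names standing in for the actual file and tensor names.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    'vae_frozen.pb',
    input_arrays=['x'],
    output_arrays=['x_decoded'],
    input_shapes={'x': [1, 90]})
converter.representative_dataset = representative_dataset_gen
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # the RandomStandardNormal error is raised here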

Keras

x = keras.layers.Input(shape=(90,))

h = keras.layers.Dense(40, activation=leakyrelu)(x)

z_mu = keras.layers.Dense(10)(h)
z_sigma = keras.layers.Dense(10, activation=softplus)(h)

eps = tf.random.normal(shape=tf.shape((10,)), mean=0, stddev=1, dtype=tf.float32)
z = z_mu + eps * z_sigma

h_decoded = keras.layers.Dense(40, activation=leakyrelu)(z)
x_decoded = keras.layers.Dense(90)(h_decoded)

train_model = keras.models.Model(x, x_decoded)

!pip install tensorflow==1.15

import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5') 
converter.allow_custom_ops = True
tfmodel = converter.convert() 
open ("model.tflite" , "wb") .write(tfmodel)
