
TFLite Cannot set tensor: Dimension mismatch on model conversion

I have a Keras model constructed as follows:

module_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
backbone = hub.KerasLayer(module_url)
backbone.build([None, 224, 224, 3])
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(len(classes), activation='softmax')])
model.build([None, 224, 224, 3])
model.compile('adam', loss='sparse_categorical_crossentropy')

Then I load the Caltech101 dataset from TensorFlow Datasets as follows:

samples, info = tfds.load("caltech101", with_info=True)
train_samples, test_samples = samples['train'], samples['test']
def normalize(row):
    image, label = row['image'], row['label']
    image = tf.dtypes.cast(image, tf.float32)
    image = tf.image.resize(image, (224, 224))
    image = image / 255.0
    return image, label
train_data = train_samples.repeat().shuffle(1024).map(normalize).batch(32).prefetch(1)
test_data = test_samples.map(normalize).batch(1)

Now I'm ready to train and save my model as follows:

model.fit_generator(train_data, epochs=1, steps_per_epoch=100)
saved_model_dir = './output'
tf.saved_model.save(model, saved_model_dir)

At this point the model is usable; I can evaluate an input of shape (224, 224, 3).
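
Running one batched test image through it works, for example (a quick check using the test_data pipeline above):

# Check that the saved model accepts a single batched image
for image, label in test_data.take(1):
    preds = model(image)   # image has shape (1, 224, 224, 3)
    print(preds.shape)     # (1, len(classes))

I try to convert this model as follows: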

def generator2():
  data = train_samples
  for _ in range(num_calibration_steps):
    images = []
    for image, _ in data.map(normalize).take(1):
      images.append(image)
    yield images

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

converter.representative_dataset = tf.lite.RepresentativeDataset(generator2)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_default_quant_model = converter.convert()

The conversion triggers the following error:

/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py in FeedTensor(self, input_value)
    110 
    111     def FeedTensor(self, input_value):
--> 112         return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_FeedTensor(self, input_value)
    113 
    114     def QuantizeModel(self, input_py_type, output_py_type, allow_float):

ValueError: Cannot set tensor: Dimension mismatch

There is a similar question, but in their case they are loading an already converted model, unlike my case where the issue happens when I try to convert the model.

The converter object is an auto-generated class wrapping C++ code via SWIG, which makes it difficult to inspect. How can I find the exact dimension expected by the converter object?
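
One way to at least see the input shape being fed to the calibration step, assuming the plain float conversion succeeds, is to convert without quantization and inspect the resulting interpreter:

# Convert without quantization, then check the expected input shape
float_converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
float_model = float_converter.convert()
interpreter = tf.lite.Interpreter(model_content=float_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])  # e.g. [1 224 224 3]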

I had the same problem when using

def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        # Get sample input data as a numpy array in a method of your choosing.
        yield [input]

from https://www.tensorflow.org/lite/performance/post_training_quantization. It seems that converter.representative_dataset expects a list containing one example with shape (1, input_shape). That is, using something along the lines of

def representative_dataset_gen():
    for i in range(num_calibration_steps):
        # Get sample input data as a numpy array in a method of your choosing.
        yield [input[i:i+1]]

where input has shape (num_samples, input_shape), solved the problem. In your case, when using TF Datasets, a working example would be:

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

samples, info = tfds.load("caltech101", with_info=True)
train_samples, test_samples = samples['train'], samples['test']

def normalize(row):
    image, label = row['image'], row['label']
    image = tf.dtypes.cast(image, tf.float32)
    image = tf.image.resize(image, (224, 224))
    image = image / 255.0
    return image, label

train_data = train_samples.repeat().shuffle(1024).map(normalize).batch(32).prefetch(1)
test_data = test_samples.map(normalize).batch(1)

module_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
backbone = hub.KerasLayer(module_url)
backbone.build([None, 224, 224, 3])
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(102, activation='softmax')])
model.build([None, 224, 224, 3])
model.compile('adam', loss='sparse_categorical_crossentropy')

model.fit_generator(train_data, epochs=1, steps_per_epoch=100)
saved_model_dir = 'output/'
tf.saved_model.save(model, saved_model_dir)

num_calibration_steps = 50

def generator():
    # Yield num_calibration_steps single-image batches, each of shape (1, 224, 224, 3)
    single_batches = train_samples.repeat(count=1).map(normalize).batch(1)
    for batch in single_batches.take(num_calibration_steps):
        yield [batch[0]]  # batch[0] is the image tensor; batch[1] is the label

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

converter.representative_dataset = tf.lite.RepresentativeDataset(generator)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_default_quant_model = converter.convert()
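
To verify the converted model, you can load it into the TFLite interpreter and feed one batched test image (a quick check; the input stays float32 here because inference_input_type was left at its default):

interpreter = tf.lite.Interpreter(model_content=tflite_default_quant_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
for image, label in test_data.take(1):
    interpreter.set_tensor(input_details['index'], image.numpy())  # (1, 224, 224, 3)
    interpreter.invoke()
    output = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
    print(output.shape)  # (1, 102)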

I had the same problem and used the following solution; set inputs_test to your test inputs and it should work for you as well:

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Add a batch dimension so each yielded example has shape (1, input_shape)
    arrs = np.expand_dims(inputs_test, axis=1).astype(np.float32)
    for data in arrs:
        yield [data]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_quant_model = converter.convert()
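
Then write the quantized model to disk so it can be copied to the target device (the file name here is just an example):

# Save the quantized model for deployment
with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_quant_model)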

I applied this on a Raspberry Pi and it worked; just be sure to install tflite_runtime outside of your venv:

import tflite_runtime.interpreter as tflite
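
A minimal inference sketch with tflite_runtime on the Pi (assuming the quantized model was saved as model_quant.tflite, as above):

import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path='model_quant.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
# Input is int8 here because inference_input_type was set to tf.int8 above
dummy = np.zeros(inp['shape'], dtype=inp['dtype'])
interpreter.set_tensor(inp['index'], dummy)
interpreter.invoke()
out = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
print(out.shape)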
