
input_tensor and output_tensor for TFLiteConverter.from_session gives TypeError (Tensor objects are only iterable when eager execution is enabled.)

I am trying to make a tflite file of a model that uses quantization-aware training. I'm building and training the model in keras, but I've run into trouble saving it ( https://github.com/tensorflow/tensorflow/issues/27880 ). It doesn't seem possible to save the graph as an h5 file and then convert it to a tflite file, so I am trying to create the tflite file directly from the session.

When I do this I get the error that the inputs and output tensors are not iterable:

TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn. 

I've tried redefining the inputs and outputs after training. That didn't work, although it may be because I'm not 100% sure what format the inputs and outputs are supposed to be in. Enabling eager execution throws this error:

RuntimeError: The Session graph is empty.  Add operations to the graph before calling run().

I'm working on rewriting this in just tensorflow, but my code seems much less efficient: this code trains in about 3 minutes, while my tensorflow-only code takes several days.

import tensorflow as tf

inputs = tf.keras.Input(shape=(feature_size,))

x = tf.keras.layers.Dense(700, activation='relu')(inputs)
x = tf.keras.layers.Dense(701, activation='relu')(x)
predictions = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model = tf.keras.Model(inputs=inputs, outputs=predictions)

sess = tf.keras.backend.get_session()
tf.contrib.quantize.create_training_graph(sess.graph)
sess.run(tf.global_variables_initializer())

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=32, epochs=4)

converter = tf.lite.TFLiteConverter.from_session(sess, input_tensors=inputs, output_tensors=predictions) #error here
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
input_arrays = converter.get_input_arrays()
converter.quantized_input_stats = {input_arrays[0]: (0., 1.)}
tflite_model = converter.convert()
open("non_seq_lite", "wb").write(tflite_model)

I'm hoping this will build a tflite file but instead it gives the above error. Thank you for any help.

Edit: If I change input_tensors and output_tensors to [inputs] and [predictions], or to model.inputs and model.outputs, I get this error:

2019-05-09 08:16:59.344160: W tensorflow/c/c_api.cc:696] Operation '{name:'dense_2/Sigmoid' id:65 op device:{} def:{{{node dense_2/Sigmoid}} = Sigmoid[T=DT_FLOAT](dense_2/act_quant/FakeQuantWithMinMaxVars:0)}}' was changed by updating input tensor after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
WARNING:tensorflow:From C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/4
2019-05-09 08:17:00.653920: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library cublas64_100.dll locally
336371/336371 [==============================] - 46s 137us/sample - loss: 0.6932 - acc: 0.8533
Epoch 2/4
336371/336371 [==============================] - 46s 136us/sample - loss: 2.1279 - acc: 0.8445
Epoch 3/4
336371/336371 [==============================] - 45s 135us/sample - loss: 2.0752 - acc: 0.8639
Epoch 4/4
336371/336371 [==============================] - 45s 135us/sample - loss: 2.0604 - acc: 0.8700
WARNING:tensorflow:From C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\lite\python\lite.py:591: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\python\framework\graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
Traceback (most recent call last):
  File "non_sequential.py", line 38, in <module>
    tflite_model = converter.convert()
  File "C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\lite\python\lite.py", line 455, in convert
    **converter_kwargs)
  File "C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\lite\python\convert.py", line 442, in toco_convert_impl
    input_data.SerializeToString())
  File "C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\lite\python\convert.py", line 205, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2019-05-09 08:20:03.997025: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:03.997372: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-05-09 08:20:03.997531: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-05-09 08:20:03.997762: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-05-09 08:20:03.997927: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
2019-05-09 08:20:03.998204: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:03.998319: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:03.998464: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:03.998763: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:03.998998: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:03.999144: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:03.999236: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:03.999437: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:03.999516: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:03.999684: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:03.999804: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:03.999942: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.000053: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.000189: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.000306: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.000496: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:04.000615: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:04.000725: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:04.000848: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:04.001003: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.001126: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.001253: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.001337: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.001525: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.001639: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.001772: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.001893: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.002011: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.002133: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.002273: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.002415: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.002564: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:04.002677: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:04.002789: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:04.002912: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:04.003076: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.003153: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.003304: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.003384: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.003486: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.003663: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.003808: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.003928: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.004027: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.004106: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.004273: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.004409: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.006262: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 135 operators, 214 arrays (0 quantized)
2019-05-09 08:20:04.007070: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 93 operators, 154 arrays (0 quantized)
2019-05-09 08:20:04.008023: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 93 operators, 154 arrays (0 quantized)
2019-05-09 08:20:04.015266: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 53 operators, 108 arrays (1 quantized)
2019-05-09 08:20:04.016352: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 53 operators, 108 arrays (1 quantized)
2019-05-09 08:20:04.017476: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 53 operators, 108 arrays (1 quantized)
2019-05-09 08:20:04.018090: F tensorflow/lite/toco/tooling_util.cc:1702] Array dense/act_quant/AssignMinEma/dense/act_quant/min/Pow, which is an input to the Sub operator producing the output array dense/act_quant/AssignMinEma/dense/act_quant/min/sub_2, is lacking min/max data, which is necessary for quantization. If accuracy matters, either target a non-quantized output format, or run quantized training with your model from a floating point checkpoint to change the input graph to contain min/max information. If you don't care about accuracy, you can pass --default_ranges_min= and --default_ranges_max= for easy experimentation.

Edit 2: Giving the converter default ranges (converter.default_ranges_stats = [-3, 3]) gives the following error in addition to the above, which may explain a bit:

2019-05-09 09:01:16.690426: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before default min-max range propagation graph transformations: 53 operators, 108 arrays (1 quantized)
2019-05-09 09:01:16.690807: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After default min-max range propagation graph transformations pass 1: 53 operators, 108 arrays (1 quantized)
2019-05-09 09:01:16.691328: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 53 operators, 108 arrays (1 quantized)
2019-05-09 09:01:16.691469: F tensorflow/lite/toco/graph_transformations/quantize.cc:491] Unimplemented: this graph contains an operator of type (Unsupported TensorFlow op: Assign) for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).

I think keras just has some operations that aren't supported when quantizing. I think I'm going to just rewrite this in only tensorflow.
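
As far as I understand, the workflow described for tf.contrib.quantize keeps the training rewrite and the export rewrite separate: train with create_training_graph, save a checkpoint, then rebuild the model, apply create_eval_graph, restore the weights, and convert that session. The eval rewrite does not add the Assign/AssignSub/AssignAdd EMA-update ops that TOCO is rejecting above. Below is only a rough sketch of that export step; it assumes a checkpoint was saved after model.fit (e.g. with tf.train.Saver().save(sess, "qat_ckpt")), and the checkpoint path, output file name, and feature_size value are placeholders:

import tensorflow as tf

tf.keras.backend.clear_session()  # fresh graph so layer/variable names match the checkpoint

feature_size = 128  # placeholder: use the same feature size as in training

inputs = tf.keras.Input(shape=(feature_size,))
x = tf.keras.layers.Dense(700, activation='relu')(inputs)
x = tf.keras.layers.Dense(701, activation='relu')(x)
predictions = tf.keras.layers.Dense(1, activation='sigmoid')(x)

# Inference-only rewrite: inserts the FakeQuant ops but not the EMA-update
# (Assign*) ops that the training rewrite adds.
tf.contrib.quantize.create_eval_graph(input_graph=tf.get_default_graph())

sess = tf.keras.backend.get_session()
tf.train.Saver().restore(sess, "qat_ckpt")  # placeholder checkpoint path

converter = tf.lite.TFLiteConverter.from_session(
    sess, input_tensors=[inputs], output_tensors=[predictions])
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
input_arrays = converter.get_input_arrays()
converter.quantized_input_stats = {input_arrays[0]: (0., 1.)}
tflite_model = converter.convert()
open("non_seq.tflite", "wb").write(tflite_model)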

Please read this. input_tensors and output_tensors should be assigned lists of tensors (i.e. the tensors wrapped in []).

I think it should work like this:

converter = tf.lite.TFLiteConverter.from_session(sess, input_tensors=[inputs], output_tensors=[predictions])

or,

converter = tf.lite.TFLiteConverter.from_session(sess, input_tensors=model.inputs, output_tensors=model.outputs)

Do you need to run this?

converter = tf.lite.TFLiteConverter.from_session(
    sess, input_tensors=[inputs], output_tensors=[predictions])  # error here
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
input_arrays = converter.get_input_arrays()
converter.quantized_input_stats = {input_arrays[0]: (0., 1.)}
tflite_model = converter.convert()
open("non_seq_lite", "wb").write(tflite_model)

How about simply running this instead:

converter = tf.lite.TFLiteConverter.from_session(
    sess, input_tensors=[inputs], output_tensors=[predictions])
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

Based on this documentation.
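
If the plain float conversion goes through, one quick way to sanity-check the resulting file is to load it with the TFLite Interpreter and run a single inference. A minimal sketch, assuming the file name from the snippet above and using a random placeholder input:

import numpy as np
import tensorflow as tf

# Load the converted model and run one forward pass as a sanity check.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Random placeholder input with the expected shape and dtype.
dummy = np.random.random_sample(input_details[0]['shape']).astype(
    input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]['index']))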
