
Tensorflow 2.0 save trained model for serving

Please help me. I am using Tensorflow 2.0 with GPU. I train the model and save it in .h5 format:

model = keras.Sequential()
model.add(layers.Bidirectional(layers.CuDNNLSTM(self._window_size, return_sequences=True),
                               input_shape=(self._window_size, x_train.shape[-1])))
model.add(layers.Dropout(rate=self._dropout, seed=self._seed))
model.add(layers.Bidirectional(layers.CuDNNLSTM((self._window_size * 2), return_sequences=True)))
model.add(layers.Dropout(rate=self._dropout, seed=self._seed))
model.add(layers.Bidirectional(layers.CuDNNLSTM(self._window_size, return_sequences=False)))
model.add(layers.Dense(units=1))
model.add(layers.Activation('linear'))
model.summary()

model.compile(
    loss='mean_squared_error',
    optimizer='adam'
)
# train the model
history = model.fit(
    x_train,
    y_train,
    epochs=self._epochs,
    batch_size=self._batch_size,
    shuffle=False,
    validation_split=0.1
)

model.save('rts.h5')

Then I load this model and use it for forecasting, and everything works:

model = keras.models.load_model('rts.h5')
y_hat = model.predict(x_test)

But then the question arose of using the trained model in Tensorflow Serving, and the .h5 format is not accepted. I run:

sudo docker run --gpus 1 -p 8501:8501 --mount type=bind,source=/home/alex/PycharmProjects/TensorflowServingTestData/RtsModel,target=/models/rts_model -e MODEL_NAME=rts_model -t tensorflow/serving:latest-gpu

And I get the error:

tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:267] No versions of servable rts_model found under base path /models/rts_model
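For context, this error usually means that Tensorflow Serving found no numeric version subdirectory under the mounted base path: it does not load .h5 files, it scans the directory for numbered folders that each contain a SavedModel. Under the path from the docker command above, the expected layout is roughly as follows (the version number 1 is illustrative):

/home/alex/PycharmProjects/TensorflowServingTestData/RtsModel/
└── 1/
    ├── saved_model.pb
    └── variables/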

I tried to save the trained model as described here, https://www.tensorflow.org/guide/saved_model#using_savedmodel_with_estimators :

And I get the error:

ValueError: Layer has 2 states but was passed 0 initial states.

I also tried to save the model as described here, https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model :

And got the same error:

ValueError: Layer has 2 states but was passed 0 initial states.

The only thing that works to save the model in a format Tensorflow Serving accepts is:

keras.experimental.export_saved_model(model, 'saved_model/1/')

The saved model works in Serving. But I get a warning that this method is deprecated and will be removed in a future version:

Instructions for updating:
Please use `model.save(..., save_format="tf")` or `tf.keras.models.save_model(..., save_format="tf")`.

And that left me stuck: when I try to use the recommended methods, I get an error; when I use what works, it says it is deprecated.

Please help. How do I save a trained model in Tensorflow 2.0 so that it can be used with Tensorflow Serving?

I was trying to fix this too!

According to the answer here, the normal LSTM (i.e. tf.keras.layers.LSTM) will use the GPU, and should generally be used over the CuDNNLSTM class unless you specifically need the original implementation (not sure why you would).

According to the docs, the normal LSTM will use the cuDNN implementation if some requirements are met (see below).

When using this LSTM layer, I could successfully save to the tf output type, just using model.save('output_path', save_format='tf'); a minimal end-to-end sketch follows the requirements list below.

Requirements for the LSTM to use cuDNN are as follows (note that all the requirements are met by the defaults):

If a GPU is available and all the arguments to the layer meet the requirement of the CuDNN kernel (see below for details), the layer will use a fast cuDNN implementation.

The requirements to use the cuDNN implementation are:

  1. activation == tanh
  2. recurrent_activation == sigmoid
  3. recurrent_dropout == 0
  4. unroll is False
  5. use_bias is True
  6. Inputs are not masked or strictly right padded.
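To tie this together, here is a minimal sketch under these assumptions: the same architecture as in the question, rebuilt with tf.keras.layers.LSTM at its defaults (which satisfy the cuDNN requirements above), then exported as a SavedModel into a numbered version directory under the Serving base path from the question. The window size, feature count and dropout values are placeholders standing in for self._window_size, x_train.shape[-1] and self._dropout.

from tensorflow import keras
from tensorflow.keras import layers

window_size = 30   # placeholder for self._window_size
n_features = 5     # placeholder for x_train.shape[-1]
dropout = 0.2      # placeholder for self._dropout

# Same architecture as in the question, but with layers.LSTM instead of
# layers.CuDNNLSTM; the default arguments (tanh/sigmoid activations,
# recurrent_dropout=0, unroll=False, use_bias=True) keep the cuDNN fast path.
model = keras.Sequential([
    layers.Bidirectional(layers.LSTM(window_size, return_sequences=True),
                         input_shape=(window_size, n_features)),
    layers.Dropout(rate=dropout),
    layers.Bidirectional(layers.LSTM(window_size * 2, return_sequences=True)),
    layers.Dropout(rate=dropout),
    layers.Bidirectional(layers.LSTM(window_size, return_sequences=False)),
    layers.Dense(units=1),
    layers.Activation('linear'),
])
model.compile(loss='mean_squared_error', optimizer='adam')

# ... fit the model as in the question: model.fit(x_train, y_train, ...) ...

# Export in the TF SavedModel format into version directory "1" of the
# base path mounted into the tensorflow/serving container.
model.save('/home/alex/PycharmProjects/TensorflowServingTestData/RtsModel/1',
           save_format='tf')

Once the container from the question is restarted against this directory, the model should be reachable at http://localhost:8501/v1/models/rts_model:predict.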
