How to load pretrained Tensorflow model from Google Cloud Storage into Datalab
I have trained a Tensorflow (v2.0) Keras model locally. I now need to load this pretrained model into Google Datalab to run predictions on a large amount of data. The Tensorflow version available on Datalab is 1.8, but I am assuming backward compatibility.
I have uploaded the saved model (model.h5) to Google Cloud Storage. I tried to load it in a Jupyter Notebook in Datalab as follows:
%%gcs read --object gs://project-xxx/data/saved_model.h5 --variable ldmodel
model = keras.models.load_model(ldmodel)
This raises the error:
---------------------------------------------------------------------------
UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-18-07c40785a14b> in <module>()
----> 1 model = keras.models.load_model(ldmodel)

/usr/local/envs/py3env/lib/python3.5/site-packages/tensorflow/python/keras/_impl/keras/engine/saving.py in load_model(filepath, custom_objects, compile)
    233       return obj
    234
--> 235   with h5py.File(filepath, mode='r') as f:
    236     # instantiate model
    237     model_config = f.attrs.get('model_config')

/usr/local/envs/py3env/lib/python3.5/site-packages/h5py/_hl/files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, **kwds)
    267         with phil:
    268             fapl = make_fapl(driver, libver, **kwds)
--> 269             fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
    270
    271             if swmr_support:

/usr/local/envs/py3env/lib/python3.5/site-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
     97         if swmr and swmr_support:
     98             flags |= h5f.ACC_SWMR_READ
---> 99         fid = h5f.open(name, flags, fapl=fapl)
    100     elif mode == 'r+':
    101         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5f.pyx in h5py.h5f.open()
h5py/defs.pyx in h5py.defs.H5Fopen()
h5py/_errors.pyx in h5py._errors.set_exception()

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 29: invalid start byte
Any help would be greatly appreciated!
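A likely cause, judging from the traceback: the `%%gcs read` magic binds the object's raw bytes to `ldmodel`, so `keras.models.load_model` receives the file's contents rather than a filesystem path, and h5py then tries to decode those bytes as a UTF-8 filename. HDF5 files begin with a binary signature whose first byte is 0x89, which is not valid UTF-8. A minimal sketch of the decode failure (the exact byte position in the real error depends on the file's contents):

```python
# HDF5 files start with the signature b"\x89HDF\r\n\x1a\n"; 0x89 is not
# a valid UTF-8 start byte, so decoding the bytes as a path must fail.
hdf5_signature = b"\x89HDF\r\n\x1a\n"

try:
    hdf5_signature.decode("utf-8")
except UnicodeDecodeError as err:
    message = str(err)

print(message)  # → 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
```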
I solved this by loading the pretrained model in .h5 format from GCS into a Tensorflow 2 notebook on the Google Cloud AI Platform.
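One way to sketch that workaround (the helper name and local filename here are illustrative assumptions, not the answerer's exact code): stage the .h5 object on the local disk first, so that `load_model` receives a real path instead of raw bytes. In a TF 2.x notebook the copy step could also be done with `tf.io.gfile.copy("gs://project-xxx/data/saved_model.h5", local_path)`; the generic staging logic looks like this:

```python
import os
import tempfile

def stage_model_bytes(model_bytes, local_path):
    """Write downloaded model bytes to a local file and return its path."""
    with open(local_path, "wb") as f:
        f.write(model_bytes)
    return local_path

# In the notebook, `model_bytes` would be the object read from GCS;
# here a placeholder (the HDF5 signature) stands in for the real file.
model_bytes = b"\x89HDF\r\n\x1a\n"
local_path = stage_model_bytes(
    model_bytes, os.path.join(tempfile.gettempdir(), "saved_model.h5"))

# Then, in the TF 2.x notebook:
# model = tf.keras.models.load_model(local_path)
```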