How to load a pretrained TensorFlow model from Google Cloud Storage into Datalab
I have trained a TensorFlow (v2.0) Keras model locally. I now need to load this pretrained model into Google Datalab in order to run predictions on a large amount of data. The TensorFlow version available in Datalab is 1.8, but I am assuming backward compatibility.
I have uploaded the saved model (model.h5) to Google Cloud Storage. I tried to load it into a Jupyter Notebook in Datalab as follows:
%%gcs read --object gs://project-xxx/data/saved_model.h5 --variable ldmodel
model = keras.models.load_model(ldmodel)
This raises an error:
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-18-07c40785a14b> in <module>()
----> 1 model = keras.models.load_model(ldmodel)
/usr/local/envs/py3env/lib/python3.5/site-packages/tensorflow/python/keras/_impl/keras/engine/saving.py in load_model(filepath, custom_objects, compile)
233 return obj
234
--> 235 with h5py.File(filepath, mode='r') as f:
236 # instantiate model
237 model_config = f.attrs.get('model_config')
/usr/local/envs/py3env/lib/python3.5/site-packages/h5py/_hl/files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, **kwds)
267 with phil:
268 fapl = make_fapl(driver, libver, **kwds)
--> 269 fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
270
271 if swmr_support:
/usr/local/envs/py3env/lib/python3.5/site-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
97 if swmr and swmr_support:
98 flags |= h5f.ACC_SWMR_READ
---> 99 fid = h5f.open(name, flags, fapl=fapl)
100 elif mode == 'r+':
101 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5f.pyx in h5py.h5f.open()
h5py/defs.pyx in h5py.defs.H5Fopen()
h5py/_errors.pyx in h5py._errors.set_exception()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 29: invalid start byte
Any help would be greatly appreciated!
I solved this by loading the pretrained model in .h5 format from GCS into a TensorFlow 2 notebook on Google Cloud AI Platform.
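For context, the traceback above hints at the root cause: `%%gcs read --variable` binds the *contents* of the object to the variable, not a local file path, so `keras.models.load_model` receives raw HDF5 bytes where it expects a filename (0x89 is the first byte of the HDF5 file signature). A minimal sketch of a workaround, using a placeholder byte string in place of a real saved model: write the bytes to a local file first, then load by path.

```python
import tempfile

# `ldmodel` stands in for the variable bound by `%%gcs read --variable`,
# which holds the object's raw bytes rather than a path. The bytes here
# are just the HDF5 magic signature, used as an illustrative placeholder.
ldmodel = b"\x89HDF\r\n\x1a\n"

# Write the bytes to a local .h5 file so load_model can open it by path.
with tempfile.NamedTemporaryFile(suffix=".h5", delete=False) as tmp:
    tmp.write(ldmodel)
    local_path = tmp.name

# With a real model file, this call would now succeed:
# model = keras.models.load_model(local_path)
print(local_path.endswith(".h5"))  # True
```

Alternatively, in a TensorFlow 2 notebook one can copy the object to local disk with `tf.io.gfile.copy("gs://...", "/tmp/model.h5")` and pass that local path to `load_model`, avoiding the magic command entirely.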