weights and biases from hdf5 file
I'm using Keras and TensorFlow to train a neural network. Via the early-stopping callback I am saving HDF5 files containing the weights and biases:
file_path = "data/weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5"
save_best_callback = ModelCheckpoint(file_path, monitor='val_loss', verbose=1, save_best_only=True,
                                     save_weights_only=False, mode='auto', period=1)
# model
visible = Input(shape=(36,))
x = Dense(40, activation='tanh')(visible)
x = Dense(45, activation='tanh')(x)
x = Dense(30, activation='tanh')(x)
x = Dense(55, activation='tanh')(x)
output = Dense(5, activation='tanh')(x)
Normally, I use
weights_1 = model.layers[1].get_weights()[0]
biases_1 = model.layers[1].get_weights()[1]
for one layer.
Somehow, the weights and biases could not be saved when I ran my script overnight (which is unusual; an HDF5 file failed to be created). Now I have multiple HDF5 files left, from which I want to pick the last one that was saved successfully and load my weights and biases from it.
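Picking the newest surviving checkpoint can be done by modification time. A stdlib-only sketch (the glob pattern below matches the file_path template from the question; the function name is made up for this example):

```python
import glob
import os

def latest_checkpoint(pattern):
    """Return the most recently modified file matching `pattern`, or None."""
    candidates = glob.glob(pattern)
    if not candidates:
        return None
    # The newest modification time corresponds to the last checkpoint written.
    return max(candidates, key=os.path.getmtime)

# e.g. latest = latest_checkpoint('data/weights-improvement-*.hdf5')
```

Sorting by mtime rather than by the epoch number in the filename avoids parsing trouble when `val_loss` in the name collides or the numbering wraps.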
I want the weight matrix of each layer to have the shape (#cells x #inputs) and the bias matrix to have the shape (#cells x 1), where for layer j=1, #inputs = 36, and for j>1, #inputs = #cells(j-1). These matrices should then be stored as numpy arrays.
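Note that Keras's Dense layers store the kernel with shape (#inputs, #cells) and the bias as a flat vector of length #cells, so the shapes described above are just a transpose and a reshape away. A minimal numpy-only sketch (the zero arrays stand in for what `layer.get_weights()` returns; the helper name is made up):

```python
import numpy as np

def to_row_major(kernel, bias):
    """Convert a Keras-style (inputs, cells) kernel and (cells,) bias
    to a (cells, inputs) weight matrix and a (cells, 1) bias matrix."""
    W = np.asarray(kernel).T             # (#cells, #inputs)
    b = np.asarray(bias).reshape(-1, 1)  # (#cells, 1)
    return W, b

# Shapes matching the first Dense(40) layer fed by 36 inputs:
kernel = np.zeros((36, 40))  # placeholder for get_weights()[0]
bias = np.zeros(40)          # placeholder for get_weights()[1]
W, b = to_row_major(kernel, bias)
# W.shape == (40, 36), b.shape == (40, 1)
```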
In total I have 5 layers, which should give me 5 weight and 5 bias matrices. I tried loading an HDF5 file with pandas:
import numpy as np
import pandas as pd
array = np.fromfile('data/weights-improvement-446-0.00.hdf5', dtype=float)
df_array = pd.DataFrame(array)
print(df_array)
but this just gives me a dataframe consisting of 1 column and m rows, where some elements are 'NaN'. Can anyone help me?
Thanks in advance.
Why not use the Keras load_model API? If it's only the weights, use the load_weights API.
>>> from keras.models import load_model
>>> model = load_model('data/weights-improvement-446-0.00.hdf5')
>>> for layer in model.layers:
...     if len(layer.weights) > 0:
...         print(layer.name, layer.weights[0].shape)
Function to read saved Keras (TensorFlow) weights from an HDF5 file:
import os
import h5py
import numpy as np

def print_model_h5_weights(weight_file_path):
    # The weight tensors are stored as the values of HDF5 Datasets; each Group
    # carries attrs describing the corresponding network layer.
    f = h5py.File(weight_file_path, 'r')  # open the weights file read-only
    try:
        if len(f.attrs.items()):
            print("{} contains:".format(f.filename))
            print("Root attributes:")
            for key, value in f.attrs.items():
                # File-level attrs: typically layer_names, backend, keras_version
                print("  {}: {}".format(key, value))
        for layer, g in f.items():
            # Each top-level item is a layer name plus a Group holding that layer's data,
            # e.g. model_weights with Group: <HDF5 group (22 members)>
            print("  {} with Group: {}".format(layer, g))
            print("    Attributes:")
            for key, value in g.attrs.items():
                # Group-level attrs: generally the weight names of the layer, e.g.
                # weight_names: [b'attention_2/q_kernel:0' b'attention_2/k_kernel:0' b'attention_2/w_kernel:0']
                print("      {}: {}".format(key, value))
            print("    Dataset:")
            for name, d in g.items():
                # The Dataset (or nested Group) that stores the actual tensors
                print('      name:', name, d)
                # Handle both weight-only and full-model files by extension
                if str(f.filename).endswith(('.weights', '.h5', '.hdf5')):
                    for k, v in d.items():  # v is equivalent to d.get(k)
                        # e.g. embeddings:0 <HDF5 dataset "embeddings:0": shape (21, 128), type "<f4">
                        print('      {} with shape: {} or {}'.format(
                            k, np.array(d.get(k)).shape, np.array(v).shape))
                        print("      {} have weights: {}".format(k, np.array(v)))  # weights of each layer
    finally:
        f.close()

print('Current working path:', os.getcwd())
h5_weight = r'modelx.h5'
print_model_h5_weights(h5_weight)
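As a self-contained illustration of the layout such a function walks, the sketch below writes a tiny HDF5 file that mimics Keras's layer-group / weight-dataset nesting and then visits every dataset with h5py. The group and dataset names are made up for the demo, not taken from a real checkpoint:

```python
import h5py
import numpy as np

path = 'demo_weights.h5'  # hypothetical file name for this demo

# Write a file mimicking Keras's nesting: layer group -> inner group -> datasets.
with h5py.File(path, 'w') as f:
    g = f.create_group('dense_1/dense_1')
    g.create_dataset('kernel:0', data=np.zeros((36, 40), dtype='f4'))
    g.create_dataset('bias:0', data=np.zeros(40, dtype='f4'))

# Read it back: visit every dataset and collect its full path -> shape.
shapes = {}
with h5py.File(path, 'r') as f:
    def collect(name, obj):
        if isinstance(obj, h5py.Dataset):
            shapes[name] = obj.shape
    f.visititems(collect)
# shapes == {'dense_1/dense_1/bias:0': (40,), 'dense_1/dense_1/kernel:0': (36, 40)}
```

`visititems` recurses through nested groups for you, which is often simpler than hand-rolled loops when the nesting depth varies between weight files.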