
Weights and biases from hdf5 file


I am using Keras and Tensorflow to train a neural network. With an early stopping callback I am saving hdf5 files that contain the weights and biases:

from keras.callbacks import ModelCheckpoint
from keras.layers import Input, Dense
from keras.models import Model

file_path = "data/weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5"

save_best_callback = ModelCheckpoint(file_path, monitor='val_loss', verbose=1, save_best_only=True,
                                     save_weights_only=False, mode='auto', period=1)


# model
visible = Input(shape=(36,))

x = Dense(40, activation='tanh')(visible)
x = Dense(45, activation='tanh')(x)
x = Dense(30, activation='tanh')(x)
x = Dense(55, activation='tanh')(x)

output = Dense(5, activation='tanh')(x)

model = Model(inputs=visible, outputs=output)

Usually, I use

weights_1 = model.layers[1].get_weights()[0]
biases_1 = model.layers[1].get_weights()[1]

for one layer.

Somehow, when I ran my script overnight, the weights and biases could not be saved (which is unusual; the creation of the hdf5 file failed). Now I have multiple hdf5 files, and from those I want to pick the last one that was saved successfully and load my weights and biases from it.
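One way to pick the last checkpoint that still opens cleanly could look like this (a rough sketch only; the helper name and the epoch-number regex are assumptions based on the file-name pattern above, and h5py is used purely as a validity check):

import glob
import re
import h5py

def last_valid_checkpoint(pattern="data/weights-improvement-*.hdf5"):
    # Parse the epoch number out of "weights-improvement-{epoch}-{val_loss}.hdf5"
    def epoch_of(path):
        m = re.search(r"weights-improvement-(\d+)-", path)
        return int(m.group(1)) if m else -1

    # Walk the candidates from the highest epoch downwards and return the
    # first file that h5py can actually open (i.e. that was written completely)
    for path in sorted(glob.glob(pattern), key=epoch_of, reverse=True):
        try:
            with h5py.File(path, "r"):
                return path
        except OSError:
            continue  # truncated or corrupt file, try the next one
    return None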

I expect the weight matrix of each layer to have the form (#cells x #inputs) and the bias matrix the form (#cells x 1), where for layer j = 1, #inputs = 36, and for j > 1, #inputs = #cells(j-1). These matrices should then be stored as numpy arrays.
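(With the model above that would mean W1: 40 x 36, W2: 45 x 40, W3: 30 x 45, W4: 55 x 30, W5: 5 x 55, and biases of size 40 x 1, 45 x 1, 30 x 1, 55 x 1 and 5 x 1.)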

In total I have 5 layers, so this should give me 5 weight matrices and 5 bias matrices. I tried loading one of the hdf5 files with pandas:

import numpy as np
import pandas as pd

array = np.fromfile('data/weights-improvement-446-0.00.hdf5', dtype=float)
df_array = pd.DataFrame(array)
print(df_array)

But this only gives me a dataframe with one column and m rows, some of whose elements are "NaN". Can anyone help me? Thanks in advance.

Why not use the keras load_model API? If it is just the weights, use the load_weights API.

>>> from keras.models import load_model
>>> model = load_model('data/weights-improvement-446-0.00.hdf5')
>>> for layer in model.layers:
...     if len(layer.weights) > 0:
...         print(layer.name, layer.weights[0].shape)
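Once the model is loaded, the arrays can be collected per layer with get_weights(), as in the question. A short sketch (the transpose and reshape are only there to match the (#cells x #inputs) and (#cells x 1) convention above; Keras itself returns a Dense kernel as (n_inputs, n_units) and the bias as a 1-D array of length n_units):

import numpy as np
from keras.models import load_model

model = load_model('data/weights-improvement-446-0.00.hdf5')

weights, biases = [], []
for layer in model.layers:
    params = layer.get_weights()
    if len(params) == 2:                    # Dense layers return [kernel, bias]
        kernel, bias = params
        weights.append(kernel.T)            # (n_units, n_inputs), i.e. (#cells x #inputs)
        biases.append(bias.reshape(-1, 1))  # (n_units, 1), i.e. (#cells x 1)

for w, b in zip(weights, biases):
    print(w.shape, b.shape)                 # e.g. (40, 36) (40, 1) for the first hidden layer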

A function to read saved Keras (tensorflow) weights from an hdf5 file:

import os
import h5py
import numpy as np

def print_model_h5_weights(weight_file_path):
    # The weight tensors are stored as the values of Datasets, and each group has attrs storing the attributes of each network layer

    f = h5py.File(weight_file_path, 'r') # open the weights h5 file read-only and return a File object
    try:
        if len(f.attrs.items()):
            print("{} contains: ".format(f.filename)) # weight_file_path
            print("Root attributes:")
        for key, value in f.attrs.items():
            print(" {}: {}".format(key, value))
            # Output the attrs information stored in the File class, generally the name of each layer: layer_names/backend/keras_version

        for layer, g in f.items():
            # Read the name of each layer and the Group class containing layer information
            print(" {} with Group: {}".format(layer, g)) # model_weights with Group: <HDF5 (22 members)>),
            print(" Attributes:")
            for key, value in g.attrs.items():
                # Output the attrs information stored in the Group class, generally the weights and biases of each layer and their names
                # eg ;weight_names: [b'attention_2/q_kernel:0' b'attention_2/k_kernel:0' b'attention_2/w_kernel:0']
                print(" {}: {}".format(key, value))
                #
                print(" Dataset:") # np.array(f.get(key)).shape()
            for name, d in g.items(): # Read the Dataset class that stores specific information in each layer
                print('name:', name, d)

                if str(f.filename).endswith(('.weights', '.h5', '.hdf5')):
                    for k, v in d.items(): # v is equivalent to d.get(k)
                        # Output the name and weights stored in each Dataset
                        # e.g. embeddings:0 <HDF5 dataset "embeddings:0": shape (21, 128), type "<f4">
                        # or   Adam <HDF5 group "/optimizer_weights/training/Adam" (63 members)>
                        print(' {} with shape: {}'.format(k, np.array(v).shape))
                        print(' {} has weights: {}'.format(k, np.array(v))) # weights of each layer

    finally:
        f.close()

print('Current working path:', os.getcwd())
h5_weight = r'modelx.h5'
print_model_h5_weights(h5_weight)
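If only the raw arrays are needed, they can also be read directly with h5py instead of rebuilding the model. A minimal sketch, assuming a full-model checkpoint as written by the ModelCheckpoint above (weights nested under the 'model_weights' group) and the default Keras layer names 'dense_1' to 'dense_5', which may differ in your file (print the keys first to check):

import h5py
import numpy as np

with h5py.File('data/weights-improvement-446-0.00.hdf5', 'r') as f:
    layers = f['model_weights']                 # full-model saves nest the weights here
    print(list(layers.keys()))                  # inspect the actual layer names first
    weights, biases = [], []
    for name in ['dense_1', 'dense_2', 'dense_3', 'dense_4', 'dense_5']:  # assumed names
        g = layers[name][name]                  # each layer group holds a subgroup of the same name
        weights.append(np.array(g['kernel:0']).T)            # (n_units, n_inputs)
        biases.append(np.array(g['bias:0']).reshape(-1, 1))  # (n_units, 1)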
