
weights and biases from hdf5 file

I am using Keras and TensorFlow to train a neural network. Via an early-stopping callback I am saving hdf5 files that contain the weights and biases:

from keras.callbacks import ModelCheckpoint
from keras.layers import Dense, Input

file_path = "data/weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5"

save_best_callback = ModelCheckpoint(file_path, monitor='val_loss', verbose=1, save_best_only=True,
                                     save_weights_only=False, mode='auto', period=1)


# model
visible = Input(shape=(36,))

x = Dense(40, activation='tanh')(visible) 
x = Dense(45, activation='tanh')(x) 
x = Dense(30, activation='tanh')(x) 
x = Dense(55, activation='tanh')(x)

output = Dense(5, activation='tanh')(x)
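
For completeness, a minimal sketch of how the snippet above is presumably wired together; the Model construction, the compile settings and the x_train/y_train data are placeholders, not part of the original code:

from keras.models import Model

# Assumed completion: build and compile the functional model, then train it
# with the checkpoint callback so that the hdf5 files get written.
model = Model(inputs=visible, outputs=output)
model.compile(optimizer='adam', loss='mse')

model.fit(x_train, y_train,              # x_train / y_train: placeholder training data
          validation_split=0.2,
          epochs=500,
          callbacks=[save_best_callback])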

Usually, I use

weights_1 = model.layers[1].get_weights()[0]
biases_1 = model.layers[1].get_weights()[1]

for one layer.

Somehow, when I ran my script overnight, the weights and biases could not be saved (unusually, the creation of the hdf5 file failed). Now I have several hdf5 files, and I want to pick the last one that was saved successfully and load my weights and biases from it.

I expect the weight matrix of each layer to have the form (#cells x #inputs) and the bias matrix the form (#cells x 1), where #inputs = 36 for layer j = 1 and #inputs = #cells(j-1) for j > 1. These matrices should then be stored as numpy arrays.
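
One detail worth checking here: Keras stores the kernel of a Dense layer as (#inputs x #cells), i.e. the transpose of the form described above, and the bias as a flat array of length #cells. A small sketch to verify the shapes on a model whose weights are already set (the loop over the Dense layers is illustrative):

# Assumes `model` is the network defined above with its weights loaded.
for j, layer in enumerate(model.layers[1:], start=1):  # skip the Input layer
    kernel, bias = layer.get_weights()   # kernel: (#inputs, #cells), bias: (#cells,)
    print(j, kernel.shape, bias.shape)
    weights_j = kernel.T                 # transpose if the (#cells x #inputs) form is wanted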

In total I have 5 layers, which should give me 5 weight and 5 bias matrices. I tried loading one of the hdf5 files with pandas:

import numpy as np
import pandas as pd

array = np.fromfile('data/weights-improvement-446-0.00.hdf5', dtype=float)
df_array = pd.DataFrame(array)
print(df_array)

But this only gives me a dataframe with one column and m rows, where some of the elements are 'NaN'. Can anyone help me? Thanks in advance.

Why not use the Keras load_model API? If it is just the weights you need, use the load_weights API.

>>> from keras.models import load_model
>>> model = load_model('data/weights-improvement-446-0.00.hdf5')
>>> for layer in model.layers:
...     if len(layer.weights) > 0:
...         print(layer.name, layer.weights[0].shape)
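
Building on that, a sketch of how the five weight and five bias matrices could be collected as numpy arrays from the re-loaded model (the list names are illustrative):

from keras.models import load_model

model = load_model('data/weights-improvement-446-0.00.hdf5')

weights, biases = [], []
for layer in model.layers:
    params = layer.get_weights()
    if len(params) == 2:               # Dense layers return [kernel, bias]
        weights.append(params[0])      # kernel, shape (#inputs, #cells)
        biases.append(params[1])       # bias, shape (#cells,)

print(len(weights), len(biases))       # expected: 5 and 5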

A function to read saved Keras (tensorflow) weights from an hdf5 file:

import os
import h5py
import numpy as np

def print_model_h5_weights(weight_file_path):
    # The weight tensors are stored as the values of h5py Datasets, and each
    # group carries attrs describing the corresponding network layer.

    f = h5py.File(weight_file_path, 'r')  # open the weights h5 file read-only and return a File object
    try:
        if len(f.attrs.items()):
            print("{} contains: ".format(f.filename))  # weight_file_path
            print("Root attributes:")
        for key, value in f.attrs.items():
            # attrs stored on the File object, typically layer_names/backend/keras_version
            print(" {}: {}".format(key, value))

        for layer, g in f.items():
            # each top-level item: a layer (or section) name and the Group holding its data
            print(" {} with Group: {}".format(layer, g))  # e.g. model_weights with Group: <HDF5 group (22 members)>
            print(" Attributes:")
            for key, value in g.attrs.items():
                # attrs stored on the Group, typically the weight names of the layer,
                # e.g. weight_names: [b'attention_2/q_kernel:0' b'attention_2/k_kernel:0' b'attention_2/w_kernel:0']
                print(" {}: {}".format(key, value))

            print(" Dataset:")
            for name, d in g.items():  # the Datasets that hold the actual values of each layer
                print('name:', name, d)

                if str(f.filename).endswith('.weights'):
                    for k, v in d.items():
                        # k, v e.g. embeddings:0 <HDF5 dataset "embeddings:0": shape (21, 128), type "<f4">
                        print(' {} with shape: {} or {}'.format(k, np.array(d.get(k)).shape, np.array(v).shape))
                        print(" {} have weights: {}".format(k, np.array(v)))  # weights of each layer
                        print(str(k))
                if str(f.filename).endswith('.h5'):
                    for k, v in d.items():  # v is equivalent to d.get(k)
                        print(k, v)
                        print(' {} with shape: {} or {}'.format(k, np.array(d.get(k)).shape, np.array(v).shape))
                        print(" {} have weights: {}".format(k, np.array(v)))  # weights of each layer
                        print(str(k))
                        # e.g. Adam <HDF5 group "/optimizer_weights/training/Adam" (63 members)>

    finally:
        f.close()

print('Current working path:', os.getcwd())
h5_weight = r'modelx.h5'
print_model_h5_weights(h5_weight)
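
If only the raw numpy arrays are needed, the datasets can also be read directly with h5py instead of rebuilding the model. A sketch, assuming the usual layout of a full-model checkpoint (a top-level 'model_weights' group whose layer sub-groups carry 'weight_names' attributes; the exact nesting can vary between Keras versions):

import h5py
import numpy as np

def load_weights_and_biases(weight_file_path):
    """Collect {weight name: numpy array} from a Keras hdf5 checkpoint."""
    arrays = {}
    with h5py.File(weight_file_path, 'r') as f:
        # Full-model checkpoints keep the weights under 'model_weights';
        # weights-only files store the layer groups at the top level.
        g = f['model_weights'] if 'model_weights' in f else f
        for layer_name in g.attrs['layer_names']:
            lname = layer_name.decode('utf8') if isinstance(layer_name, bytes) else layer_name
            layer_group = g[lname]
            for weight_name in layer_group.attrs['weight_names']:
                key = weight_name.decode('utf8') if isinstance(weight_name, bytes) else weight_name
                arrays[key] = np.array(layer_group[key])
    return arrays

params = load_weights_and_biases('data/weights-improvement-446-0.00.hdf5')
for name, value in params.items():
    print(name, value.shape)   # e.g. dense_1/kernel:0 -> (36, 40), dense_1/bias:0 -> (40,)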
