
How to store the result of an operation (like TopK) per epoch in Keras

I have written a custom layer in Keras. In part of this custom layer, say, I have a matrix like this:

c = tf.cast(tf.nn.top_k(tf.nn.top_k(n, tf.shape(n)[1])[1][:, ::-1], tf.shape(n)[1])[1][:, ::-1], dtype=tf.float32)
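(For reference, this nested `top_k` pattern is the "double argsort" trick: applied with `k = tf.shape(n)[1]`, it turns each entry into its within-row rank. A minimal NumPy sketch of the equivalent computation, with a made-up input `n`:)

```python
import numpy as np

# Made-up stand-in for the tensor `n` in the layer
n = np.array([[0.3, 0.1, 0.2],
              [0.5, 0.9, 0.1]])

# Equivalent of tf.nn.top_k(tf.nn.top_k(n, k)[1][:, ::-1], k)[1][:, ::-1]
# with k = n.shape[1]: each entry becomes its within-row rank (0 = smallest).
ranks = np.argsort(np.argsort(n, axis=1), axis=1).astype(np.float32)
# ranks == [[2., 0., 1.],
#           [1., 2., 0.]]
```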

My question is how I can keep track of the resulting values of this matrix per epoch.

For example, if I train for 20 epochs, I need to save 20 of these matrices in a CSV file.

(I know how to save the weights of the model, but this matrix is the result of an intermediate-layer operation, and I need to keep track of it.)
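Note that `np.savetxt` overwrites its target file, so collecting one matrix per epoch needs either a separate file per epoch or stacking the matrices before a single save. A framework-free sketch (the file names and shapes here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Option 1: one CSV per epoch (epoch_00.csv, epoch_01.csv, ...)
for epoch in range(3):
    mat = rng.random((2, 4))  # stand-in for the per-epoch matrix
    np.savetxt(f"epoch_{epoch:02d}.csv", mat, delimiter=",")

# Option 2: collect all epochs, then save one vertically stacked CSV
mats = [rng.random((2, 4)) for _ in range(3)]
np.savetxt("all_epochs.csv", np.vstack(mats), delimiter=",")
```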

What I have done:

This is the structure of my layer:

class my_layer(Layer):
    def __init__(self, topk, ctype, **kwargs):
        super(my_layer, self).__init__(**kwargs)
        self.x_prev = None
        self.topk_mat = None

    def call(self, x):
        'blah blah'

    def get_config(self):
        'blah blah'

    def k_comp_tanh(self, x, f=6):
        'blah blah'
        if self.topk_mat is None:
            self.topk_mat = self.add_weight(shape=(20, 25),
                                            initializer='zeros',
                                            trainable=False,
                                            # dtype=tf.float32,
                                            name='topk_mat')

        c = tf.cast(tf.nn.top_k(tf.nn.top_k(n, tf.shape(n)[1])[1][:, ::-1], tf.shape(n)[1])[1][:, ::-1], dtype=tf.float32)
        self.topk_mat.assign(c)

The code for building the model and fitting the data:

class AutoEncoder(object):
    def __init__(self, input_size, dim, comp_topk=None, ctype=None, save_model='best_model'):
        self.input_size = input_size
        self.dim = dim
        self.comp_topk = comp_topk
        self.ctype = ctype
        self.save_model = save_model
        self.build()

    def build(self):
        input_layer = Input(shape=(self.input_size,))
        encoded_layer = Dense(self.dim, activation=act, kernel_initializer="glorot_normal", name="Encoded_Layer")
        encoded = encoded_layer(input_layer)
        encoder_model = Model(outputs=encoded, inputs=input_layer)
        encoder_model.save('pathto/encoder_model')

        self.encoded_instant = my_layer(self.comp_topk, self.ctype)
        encoded = self.encoded_instant(encoded)
        decoded = Dense_tied(self.input_size, activation='sigmoid', tied_to=encoded_layer, name='Decoded_Layer')(encoded)

        # this model maps an input to its reconstruction
        self.autoencoder = Model(outputs=decoded, inputs=input_layer)

        # this model maps an input to its encoded representation
        self.encoder = Model(outputs=encoded, inputs=input_layer)

        # create a placeholder for an encoded input
        encoded_input = Input(shape=(self.dim,))
        # retrieve the last layer of the autoencoder model
        decoder_layer = self.autoencoder.layers[-1]
        # create the decoder model
        self.decoder = Model(outputs=decoder_layer(encoded_input), inputs=encoded_input)

    def fit(self, train_X, val_X, nb_epoch=50, batch_size=100, contractive=None):
        import tensorflow as tf
        optimizer = Adam(lr=0.0005)

        self.autoencoder.compile(optimizer=optimizer, loss='binary_crossentropy') # kld, binary_crossentropy, mse

        cbk = tf.keras.callbacks.LambdaCallback(
            on_epoch_begin=lambda epoch, logs: np.savetxt("foo.csv", tf.keras.backend.eval(self.encoded_instant.topk_mat), delimiter=","))
        self.autoencoder.fit(train_X[0], train_X[1],
                             epochs=nb_epoch,
                             batch_size=batch_size,
                             shuffle=True,
                             validation_data=(val_X[0], val_X[1]),
                             callbacks=[
                                 ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3, min_lr=0.01),
                                 EarlyStopping(monitor='val_loss', min_delta=1e-5, patience=5, verbose=1, mode='auto'),
                                 cbk,
                                 CustomModelCheckpoint(custom_model=self.encoder, filepath="pathtocheckpoint/{epoch}.hdf5", save_best_only=True, monitor='val_loss', mode='auto')
                             ])

        return self


cbk = tf.keras.callbacks.LambdaCallback(
    on_epoch_begin=lambda epoch, logs: np.savetxt("mycsvtopk.csv", tf.keras.backend.eval(my_layer.topk_mat), delimiter=","))

self.autoencoder.fit(train_X[0], train_X[1],
                     epochs=nb_epoch,
                     batch_size=batch_size,
                     shuffle=True,
                     validation_data=(val_X[0], val_X[1]),
                     callbacks=[cbk, CustomModelCheckpoint(custom_model=self.encoder, filepath="path_to_file/{epoch}.hdf5", save_best_only=True, monitor='val_loss', mode='auto')])

And this is where I instantiate and call the AutoEncoder:

ae = AutoEncoder(n_vocab, args.n_dim, comp_topk=args.comp_topk, ctype=args.ctype, save_model=args.save_model)
ae.fit([X_train_noisy, X_train], [X_val_noisy, X_val], nb_epoch=args.n_epoch, \
        batch_size=args.batch_size, contractive=args.contractive)

It raises this error:

tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value mylayer_1/topk_mat
     [[{{node _retval_mylayer_1/topk_mat_0_0}} = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](mylayer_1/topk_mat)]]
Exception TypeError: TypeError("'NoneType' object is not callable",) in <bound method Session.__del__ of <tensorflow.python.client.session.Session object at 0x7f56ae01bc50>> ignored

All the examples of custom callbacks that I have seen relate to metrics the model is already aware of, such as loss or accuracy. What I did above, based on @Jhadi's idea, is to store the result in a variable initialized with None, and then pass that variable during fitting so it can be saved in CSV format. This seems like it should work, but I get this error and have tried many ways to fix it without success. It looks to me like a Keras library issue.

I think you can save the variable using a list-tracking Checkpoint.

You need to add code into the training, so you need to write your own training loop and save the variable at the end of each epoch.

def fit_and_save_log(self, train_X, val_X, nb_epoch=50, batch_size=100, contractive=None):
    import tensorflow as tf
    optimizer = Adam(lr=0.0005)

    self.autoencoder.compile(optimizer=optimizer, loss='binary_crossentropy') # kld, binary_crossentropy, mse   
    
    save = tf.train.Checkpoint()
    save.listed = []
    
    # Prepare dataset
    X, y = train_X
    train_ds = tf.data.Dataset.from_tensor_slices((X, y))
    train_ds = train_ds.shuffle(10000)
    train_ds = train_ds.batch(batch_size)
    iterator = train_ds.make_initializable_iterator()
    next_batch = iterator.get_next()

    sess = tf.keras.backend.get_session()
    for epoch in range(nb_epoch):
        sess.run(iterator.initializer)
        
        while True:
            try:
                self.autoencoder.train_on_batch(next_batch[0], next_batch[1])
            except tf.errors.OutOfRangeError:
                break
        
        save.listed.append(self.encoded_instant.topk_mat)

        # you can compute validation results here 

    save_path = save.save('./topk_mat_log', session=tf.keras.backend.get_session())
    return self

Alternatively, you can use the model.fit function if you prefer. Doing it this way can be easier, since we do not need to care about creating the batches. However, repeatedly calling model.fit may result in a memory leak; you can give it a try and check how it behaves. [1]

def fit_and_save_log(self, train_X, val_X, nb_epoch=50, batch_size=100, contractive=None):
    import tensorflow as tf
    optimizer = Adam(lr=0.0005)

    self.autoencoder.compile(optimizer=optimizer, loss='binary_crossentropy') # kld, binary_crossentropy, mse   
    
    save = tf.train.Checkpoint()
    save.listed = []
    
    for epoch in range(nb_epoch):
        self.autoencoder.fit(train_X[0], train_X[1],
                epochs=1,
                batch_size=batch_size,
                shuffle=True,
                validation_data=(val_X[0], val_X[1]))
        
        save.listed.append(self.encoded_instant.topk_mat)

        # you can compute validation results here 

    save_path = save.save('./topk_mat_log', session=tf.keras.backend.get_session())
    return self

Then you can restore the saved variables like this:

restore = tf.train.Checkpoint()
restore.restore(save_path)
restore.listed = []
v1 = tf.Variable(0.)
restore.listed.append(v1) # Now v1 corresponds with topk_mat in the first epoch

