Keras: Wrong Number of Training Epochs

I'm trying to build a class to quickly initialize and train an autoencoder for rapid prototyping. One thing I'd like to be able to do is quickly adjust the number of epochs I train for. However, it seems like no matter what I do, the model trains each layer for 100 epochs! I'm using the tensorflow backend.

Here is the code from the two offending methods.

    def pretrain(self, X_train, nb_epoch = 10):
        data = X_train
        for ae in self.pretrains:
            # train this layer-wise autoencoder, forwarding the epoch count
            ae.fit(data, data, nb_epoch = nb_epoch)
            # switch the layer to encode-only output and recompile
            ae.layers[0].output_reconstruction = False
            ae.compile(optimizer='sgd', loss='mse')
            # the encoded output becomes the input to the next layer
            data = ae.predict(data)

.........

    def fine_train(self, X_train, nb_epoch):
        weights = [ae.layers[0].get_weights() for ae in self.pretrains]

        dims = self.dims
        encoder = containers.Sequential()
        decoder = containers.Sequential()

        ## add special input encoder
        encoder.add(Dense(output_dim = dims[1], input_dim = dims[0],
            weights = weights[0][0:2], activation = 'linear'))

        ## add the rest of the encoders
        for i in range(1, len(dims) - 1):
            encoder.add(Dense(output_dim = dims[i+1],
                weights = weights[i][0:2], activation = self.act))

        ## add the decoders from the end
        decoder.add(Dense(output_dim = dims[len(dims) - 2], input_dim = dims[len(dims) - 1],
            weights = weights[len(dims) - 2][2:4], activation = self.act))

        for i in range(len(dims) - 2, 1, -1):
            decoder.add(Dense(output_dim = dims[i - 1],
                weights = weights[i-1][2:4], activation = self.act))

        ## add the output layer decoder
        decoder.add(Dense(output_dim = dims[0],
            weights = weights[0][2:4], activation = 'linear'))

        masterAE = AutoEncoder(encoder = encoder, decoder = decoder)
        masterModel = models.Sequential()
        masterModel.add(masterAE)
        masterModel.compile(optimizer = 'sgd', loss = 'mse')
        masterModel.fit(X_train, X_train, nb_epoch = nb_epoch)
        self.model = masterModel

Any suggestions on how to fix the problem would be appreciated. My original suspicion was that it was something to do with tensorflow, so I tried running with the theano backend but encountered the same problem.

Here is a link to the full program.

Following the Keras doc, the fit method uses a default of 100 training epochs (nb_epoch=100):

fit(X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[], validation_split=0.0, validation_data=None, shuffle=True, show_accuracy=False, class_weight=None, sample_weight=None)
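
As a quick illustration of that default, here is a minimal sketch assuming the same old Keras 0.x Sequential API used in the question; the toy model and random data are invented for the example:

    # minimal sketch, assuming the old Keras 0.x API from the question;
    # the toy model and random data are invented for illustration
    import numpy as np
    from keras.models import Sequential
    from keras.layers.core import Dense

    X = np.random.rand(100, 8)

    model = Sequential()
    model.add(Dense(output_dim=8, input_dim=8, activation='linear'))
    model.compile(optimizer='sgd', loss='mse')

    model.fit(X, X)               # nb_epoch omitted: falls back to the 100-epoch default
    model.fit(X, X, nb_epoch=10)  # nb_epoch passed explicitly: trains for 10 epochs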

I'm not sure how you are running these methods, but following the "Typical usage" from the original code, you should be able to run something like this (adjusting the variable num_epoch as required):

#Typical usage:
num_epoch = 10
ae = JPAutoEncoder(dims)
ae.pretrain(X_train, nb_epoch = num_epoch)
ae.train(X_train, nb_epoch = num_epoch)
ae.predict(X_val)
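
If each layer still trains for 100 epochs after passing num_epoch this way, then some fit call in the chain is being reached without an explicit nb_epoch and is falling back to the default above, so checking every fit call site for the argument should locate the culprit.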
