
Autoencoder grid search hyperparameter tuning in Keras

My data shape is the same; I have just generated random numbers here. In reality the data are floats in the range -6 to 6, which I scaled as well. The input layer size and encoding dimension have to stay the same. When I train, the loss starts at 0.631 and stays there the whole time. I changed the learning rate manually. I am new to Python and do not know how to add a grid search to this code to find the right parameters. What else can I do to tune my network?

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras import optimizers

#Train data
x_train=np.random.rand(2666000)
x_train = (x_train - x_train.min())/(x_train.max() - x_train.min())
x_train=x_train.reshape(-1,2000)

x_test = []  #empty for now; will be filled for testing later
#Enc Dimension 
encoding_dim=100
#Input shape
input_dim = Input(shape=(2000,))
#Encoding Layer
encoded = Dense(encoding_dim, activation='relu')(input_dim)
#Decoding Layer
decoded = Dense(2000, activation='sigmoid')(encoded)

#Model AE
autoencoder = Model(input_dim, decoded)
#Model Encoder 
encoder = Model(input_dim, encoded)
#Encoding
encoded_input = Input(shape=(encoding_dim,))
#Decoding 
decoder_layer = autoencoder.layers[-1]
#Model Decoder 
decoder = Model(encoded_input, decoder_layer(encoded_input))

#The optimizer must be assigned to a variable before being passed to compile()
optimizer = optimizers.Adadelta(lr=0.1, rho=0.95, epsilon=None, decay=0.0)
autoencoder.compile(optimizer=optimizer, loss='binary_crossentropy', 
                metrics=['accuracy'])
#Train and test
epochs = 50  #'epochs' was never defined in the original snippet; 50 is an arbitrary placeholder
autoencoder_train = autoencoder.fit(x_train, x_train,
            epochs=epochs, shuffle=False, batch_size=2048)

I suggest adding more hidden layers. If your loss stays the same, it means at least one of two things:

  • Your data is more or less random and there are no relationships to be drawn

  • Your model is not complex enough to learn meaningful relationships from your data

A rule of thumb for me is that a model should be powerful enough to overfit the data given enough training iterations.

Unfortunately, there is a fine line between sufficiently complex and too complex. You have to play around with the number of hidden layers, the number of units in each layer, and the number of epochs you train your network for. Since you only have two Dense layers, a good starting point would be to increase model complexity, as in the sketch below.
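For example, here is what a deeper version of your autoencoder could look like. This is a minimal sketch, not a tested recipe: it keeps your 2000-dimensional input and 100-dimensional bottleneck, and the intermediate widths (512 and 256) are arbitrary values you would tune:

from keras.layers import Input, Dense
from keras.models import Model

input_layer = Input(shape=(2000,))
#Gradually compress the input instead of jumping straight to the bottleneck
encoded = Dense(512, activation='relu')(input_layer)
encoded = Dense(256, activation='relu')(encoded)
bottleneck = Dense(100, activation='relu')(encoded)
#Mirror the encoder on the way back up
decoded = Dense(256, activation='relu')(bottleneck)
decoded = Dense(512, activation='relu')(decoded)
decoded = Dense(2000, activation='sigmoid')(decoded)

deep_autoencoder = Model(input_layer, decoded)
deep_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')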


If you insist on using a grid search, Keras has a wrapper for scikit-learn, and scikit-learn has a grid search module. A toy example:

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from keras.layers import Input, Dense
from keras.models import Model
from keras import optimizers

def create_model(activation='relu', learn_rate=0.1, init='uniform', optimizer='SGD'):
    #Return a compiled but untrained Keras model.
    #GridSearchCV passes each parameter combination in through these arguments.
    inp = Input(shape=(2000,))  #same shape as the autoencoder in the question
    enc = Dense(100, activation=activation, kernel_initializer=init)(inp)
    dec = Dense(2000, activation='sigmoid', kernel_initializer=init)(enc)
    model = Model(inp, dec)
    opt = getattr(optimizers, optimizer)(lr=learn_rate)
    model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, batch_size=1000, epochs=10)
#now write out all the parameter values you want the grid search to try
activation = ['relu', 'tanh', 'sigmoid']  #add more candidates as needed
learn_rate = [0.1, 0.2]
init = ['uniform', 'normal', 'zero']
optimizer = ['SGD', 'Adam']
param_grid = dict(activation=activation, learn_rate=learn_rate, init=init, optimizer=optimizer)
grid = GridSearchCV(estimator=model, param_grid=param_grid)
result = grid.fit(X, y)  #for an autoencoder, the target is the input: grid.fit(x_train, x_train)
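After fitting, the grid object reports the winning combination. Assuming the `result` object from the snippet above:

print("Best score: %f using %s" % (result.best_score_, result.best_params_))

Keep in mind that the grid grows multiplicatively: the four lists above already give 3 × 2 × 3 × 2 = 36 combinations, each trained with cross-validation, so keep the lists short at first.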

