
Neural network output exactly correlated with output layer bias weights (Keras)

I'm implementing a neural network in Keras, and after several rounds of training, the network output becomes completely correlated with the bias weights in the output layer. Because of this, the network output is the same regardless of the input. This data has previously produced good results, but I've done something to cause this problem. One of the changes I made was to make the network shape easier to adjust, and I now instantiate it like:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

# Hidden layer sizes; full network shape is 3048 -> 40 -> 8 -> 4 -> 254
layers = [40, 8, 4]
model = Sequential()
model.add(Dense(layers[0], input_dim=np.shape(train_x_scaled)[1], activation='relu'))
for layer_size in layers[1:]:
    model.add(Dense(layer_size, activation='relu'))
model.add(Dense(np.shape(train_y_scaled)[1], activation='sigmoid'))
optimizer = optimizers.Adam(lr=0.01)
model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['mse'])
history = model.fit(train_x_scaled, train_y_scaled, epochs=iters)

The training input is of shape (1513, 3048) and the target is of shape (1513, 254).

I think I found my problem. My network shape was 3048, 40, 8, 4, 254. By necking down to only 4 units just prior to the 254-unit output, I limited what the network could express, and it simply learned to ignore the input and predict a constant (the output layer bias). I did have one successful training run with this shape, so I suspect I was just lucky with the weight initialization in that instance. After changing the network to 3048, 40, 8, 40, 254, I was able to get useful training done again.
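For reference, the only change needed is to widen the last hidden layer so the 4-unit bottleneck before the output is removed. A minimal sketch of the adjusted instantiation, reusing the same construction loop and variable names from the question above (not the author's verbatim code):

# Widen the layer just before the output: 3048 -> 40 -> 8 -> 40 -> 254
# (sketch based on the fix described above; everything else is unchanged)
layers = [40, 8, 40]
model = Sequential()
model.add(Dense(layers[0], input_dim=np.shape(train_x_scaled)[1], activation='relu'))
for layer_size in layers[1:]:
    model.add(Dense(layer_size, activation='relu'))
model.add(Dense(np.shape(train_y_scaled)[1], activation='sigmoid'))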
