
Model gives same output, accuracy, loss for all inputs (keras)

This is my model for the Udacity self-driving-car project:

from keras.models import Sequential
from keras.layers import Lambda, Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=(64, 64, 3)))
model.add(Conv2D(3, (1, 1), activation='elu'))
model.add(Conv2D(32, (3, 3), activation='elu'))
model.add(Conv2D(32, (3, 3), activation='elu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(64, (3, 3), activation='elu'))
model.add(Conv2D(64, (3, 3), activation='elu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(128, (3, 3), activation='elu'))
model.add(Conv2D(128, (3, 3), activation='elu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(512, activation='elu'))
model.add(Dense(64, activation='elu'))
model.add(Dense(16, activation='elu'))
model.add(Dense(1, activation='softmax'))
model.summary()

I am compiling the model with the Adam optimizer:

from keras.optimizers import Adam
model.compile(optimizer=Adam(lr=0.0001), loss='mean_squared_error', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=256, epochs=250, shuffle=True, validation_split=0.2)

I have tried many batch size and epoch combinations, but the result is always the same. I start with 12,000 images for training and testing the model. My problem is that the accuracy is very low and stays constant throughout the epochs, and the model predicts the same output for every preprocessed image (the images are preprocessed before training). Here is sample output showing the constant (and very low) accuracy and loss:

 Train on 8084 samples, validate on 2021 samples
    Epoch 1/250
    8084/8084 [==============================] - 8s 1ms/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 2/250
    8084/8084 [==============================] - 6s 763us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 3/250
    8084/8084 [==============================] - 6s 779us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 4/250
    8084/8084 [==============================] - 6s 779us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 5/250
    8084/8084 [==============================] - 6s 790us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 6/250
    8084/8084 [==============================] - 6s 770us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 7/250
    8084/8084 [==============================] - 6s 739us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 8/250
    8084/8084 [==============================] - 6s 735us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 9/250
    8084/8084 [==============================] - 6s 724us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 10/250
    8084/8084 [==============================] - 6s 727us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015
    Epoch 11/250
    8084/8084 [==============================] - 6s 729us/step - loss: 1.0467 - acc: 0.0014 - val_loss: 1.0666 - val_acc: 0.0015

Please help. Thank you.

Your model doesn't learn anything because you used a softmax activation with a single output neuron, which means the output is a constant 1.0 regardless of the weights.
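A quick numerical check of this (illustrative, plain NumPy, not from the original post): softmax normalizes across the output vector, so with a single output neuron it reduces to exp(x)/exp(x) and is exactly 1.0 for any logit.

```python
import numpy as np

def softmax(logits):
    # Standard softmax with max-subtraction for numerical stability.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# With one output neuron, the "distribution" has a single entry,
# so it is always exactly 1.0, whatever the logit is.
for logit in (-5.0, 0.0, 3.7):
    print(softmax(np.array([logit])))  # [1.]
```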

You should change the activation to the hyperbolic tangent (tanh), since it matches your output range of [-1, 1]. You should also remove the accuracy metric: this is a regression task, and accuracy only applies to classification.
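This also explains why the loss never moves across 250 epochs. A NumPy sketch (illustrative, not from the answer): the derivative of a one-element softmax is zero everywhere, so no gradient ever reaches the weights, whereas a tanh output has derivative 1 - tanh(x)^2 > 0 and can be trained.

```python
import numpy as np

def single_softmax(x):
    # Softmax over a one-element vector, written as a scalar function.
    return np.exp(x) / np.exp(x)

def num_grad(f, x, eps=1e-6):
    # Central-difference numerical derivative.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

for x in (-3.0, 0.0, 2.5):
    g_soft = num_grad(single_softmax, x)  # ~0: nothing to backpropagate
    g_tanh = num_grad(np.tanh, x)         # positive: learning can proceed
    print(x, g_soft, g_tanh)
```

With a zero gradient at the output, every weight update is zero, which matches the identical loss and accuracy values in the training log above.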
