Validation Accuracy not improving CNN

I am fairly new to deep learning and am currently trying to predict consumer choices based on EEG data. The dataset consists of 1045 EEG recordings, each with a corresponding label indicating Like or Dislike for a product. The classes are distributed as follows: 44% Likes and 56% Dislikes. I read that Convolutional Neural Networks are suitable for working with raw EEG data, so I tried to implement a network based on Keras with the following structure:

import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense
from keras.optimizers import Adam

# Hold out 20% of the recordings for validation
X_train, X_test, y_train, y_test = train_test_split(full_data, target, test_size=0.20, random_state=42)

y_train = np.asarray(y_train).astype('float32').reshape((-1, 1))
y_test = np.asarray(y_test).astype('float32').reshape((-1, 1))

# X_train.shape = (836, 512, 14)  -> 836 recordings, 512 time steps, 14 channels
# y_train.shape = (836, 1)

model = Sequential()
model.add(Conv1D(16, kernel_size=3, activation="relu", input_shape=(512, 14)))
model.add(MaxPooling1D())
model.add(Conv1D(8, kernel_size=3, activation="relu"))
model.add(MaxPooling1D())
model.add(Flatten())
model.add(Dense(1, activation="sigmoid"))

model.compile(optimizer=Adam(learning_rate=0.001), loss='binary_crossentropy', metrics=['accuracy'])

model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=64)

When I fit the model, however, the validation accuracy does not change at all, as the following output shows:


Epoch 1/20
14/14 [==============================] - 0s 32ms/step - loss: 292.6353 - accuracy: 0.5383 - val_loss: 0.7884 - val_accuracy: 0.5407
Epoch 2/20
14/14 [==============================] - 0s 7ms/step - loss: 1.3748 - accuracy: 0.5598 - val_loss: 0.8860 - val_accuracy: 0.5502
Epoch 3/20
14/14 [==============================] - 0s 6ms/step - loss: 1.0537 - accuracy: 0.5598 - val_loss: 0.7629 - val_accuracy: 0.5455
Epoch 4/20
14/14 [==============================] - 0s 6ms/step - loss: 0.8827 - accuracy: 0.5598 - val_loss: 0.7010 - val_accuracy: 0.5455
Epoch 5/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7988 - accuracy: 0.5598 - val_loss: 0.8689 - val_accuracy: 0.5407
Epoch 6/20
14/14 [==============================] - 0s 6ms/step - loss: 1.0221 - accuracy: 0.5610 - val_loss: 0.6961 - val_accuracy: 0.5455
Epoch 7/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7415 - accuracy: 0.5598 - val_loss: 0.6945 - val_accuracy: 0.5455
Epoch 8/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7381 - accuracy: 0.5574 - val_loss: 0.7761 - val_accuracy: 0.5455
Epoch 9/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7326 - accuracy: 0.5598 - val_loss: 0.6926 - val_accuracy: 0.5455
Epoch 10/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7338 - accuracy: 0.5598 - val_loss: 0.6917 - val_accuracy: 0.5455
Epoch 11/20
14/14 [==============================] - 0s 7ms/step - loss: 0.7203 - accuracy: 0.5610 - val_loss: 0.6916 - val_accuracy: 0.5455
Epoch 12/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7192 - accuracy: 0.5610 - val_loss: 0.6914 - val_accuracy: 0.5455
Epoch 13/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7174 - accuracy: 0.5610 - val_loss: 0.6912 - val_accuracy: 0.5455
Epoch 14/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7155 - accuracy: 0.5610 - val_loss: 0.6911 - val_accuracy: 0.5455
Epoch 15/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7143 - accuracy: 0.5610 - val_loss: 0.6910 - val_accuracy: 0.5455
Epoch 16/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7129 - accuracy: 0.5610 - val_loss: 0.6909 - val_accuracy: 0.5455
Epoch 17/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7114 - accuracy: 0.5610 - val_loss: 0.6907 - val_accuracy: 0.5455
Epoch 18/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7103 - accuracy: 0.5610 - val_loss: 0.6906 - val_accuracy: 0.5455
Epoch 19/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7088 - accuracy: 0.5610 - val_loss: 0.6906 - val_accuracy: 0.5455
Epoch 20/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7075 - accuracy: 0.5610 - val_loss: 0.6905 - val_accuracy: 0.5455

Thanks in advance for any insights!

The phenomenon you are running into is called underfitting. It happens when the amount or quality of your training data is insufficient, or when your network architecture is too small and not capable of learning the problem.
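A quick way to confirm this from the log (a minimal sanity check, assuming the y_test array from the question): compare the stalled val_accuracy of 0.5455 against the majority-class baseline, i.e. the accuracy of always predicting the more common class. If the two match, the network has collapsed to a constant prediction rather than learning from the signal:

import numpy as np

# Fraction of the positive class in the validation split; the baseline is
# whichever class is more common. If val_accuracy equals this baseline,
# the model is effectively predicting a single class.
p = float(np.mean(y_test))
majority_baseline = max(p, 1.0 - p)
print(f"Majority-class baseline accuracy: {majority_baseline:.4f}")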

Try normalizing your input data, and experiment with different network architectures, learning rates, and activation functions.
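For example, a minimal normalization sketch, assuming the X_train and X_test arrays from the question (shape (samples, 512, 14)): standardize each of the 14 EEG channels using statistics computed on the training set only, then apply the same statistics to the test set:

# Per-channel mean and standard deviation over all samples and time steps,
# computed on the training data only to avoid leaking test statistics.
channel_mean = X_train.mean(axis=(0, 1), keepdims=True)  # shape (1, 1, 14)
channel_std = X_train.std(axis=(0, 1), keepdims=True) + 1e-8

X_train = (X_train - channel_mean) / channel_std
X_test = (X_test - channel_mean) / channel_std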

As @Muhammad Shahzad stated in his comment, adding some Dense layers after flattening would be a concrete architecture adaptation you should try.
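A sketch of that adaptation is below; the hidden layer sizes (64 and 32) are illustrative choices, not values given in the comment:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

model = Sequential()
model.add(Conv1D(16, kernel_size=3, activation="relu", input_shape=(512, 14)))
model.add(MaxPooling1D())
model.add(Conv1D(8, kernel_size=3, activation="relu"))
model.add(MaxPooling1D())
model.add(Flatten())
model.add(Dense(64, activation="relu"))  # added hidden layer
model.add(Dense(32, activation="relu"))  # added hidden layer
model.add(Dense(1, activation="sigmoid"))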

You can also increase the number of epochs, and you should increase the size of the dataset. You can also use:

from keras.preprocessing.image import ImageDataGenerator

# Augmentation pipeline that yields randomly transformed copies of the input
train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        vertical_flip=True,
        channel_shift_range=0.2,
        fill_mode='nearest'
        )

to feed the model more data, which will hopefully increase the validation_accuracy.
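For completeness, a hypothetical usage sketch follows. Note that ImageDataGenerator is designed for image batches of shape (samples, height, width, channels), so the 3D EEG arrays from the question would first need a trailing channel axis, and the model would have to be rebuilt to accept that shape (e.g. Conv2D instead of Conv1D):

# Hypothetical sketch: treat each 512x14 recording as a single-channel "image".
# The Conv1D model above does not accept this shape as-is and would need to be
# adapted first.
X_train_img = X_train.reshape((-1, 512, 14, 1))
X_test_img = X_test.reshape((-1, 512, 14, 1))

model.fit(train_datagen.flow(X_train_img, y_train, batch_size=64),
          validation_data=(X_test_img, y_test),
          epochs=20)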
