
Keras 1D CNN always predicts the same result even if accuracy is high on training set

My 1D CNN's validation accuracy is stuck at 0.5, because I always get the same prediction out of a balanced dataset. Meanwhile, my training accuracy keeps increasing and the loss decreases as expected.

Strangely, if I run model.evaluate() on my training set (where accuracy was close to 1 in the last epoch), the accuracy will also be 0.5. How can the accuracy here differ so much from the training accuracy of the last epoch? I also tried training and evaluating with a batch size of 1, but the problem persists.

Well, I have been looking for different solutions for quite some time, but still no luck. Possible problems I have already looked into:

  1. My dataset is properly balanced and shuffled;
  2. My labels are correct;
  3. Tried adding fully connected layers;
  4. Tried adding/removing dropout from the fully connected layers;
  5. Tried the same architecture, but with 1 neuron and a sigmoid activation in the last layer;
  6. Tried changing the learning rate (went down to 0.0001, but still the same problem).

Here is my code:

import pathlib
import numpy as np
import ipynb.fs.defs.preprocessDataset as preprocessDataset
import pickle
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras import Input
from tensorflow.keras.layers import Conv1D, BatchNormalization, Activation, MaxPooling1D, Flatten, Dropout, Dense
from tensorflow.keras.optimizers import SGD

# Load the pre-processed train/test datasets from the sibling "datasets" folder
main_folder = pathlib.Path.cwd().parent
datasetsFolder=f'{main_folder}\\datasets'
trainDataset = preprocessDataset.loadDataset('DatasetTime_Sg12p5_Ov75_Train',datasetsFolder)
testDataset = preprocessDataset.loadDataset('DatasetTime_Sg12p5_Ov75_Test',datasetsFolder)

# Each dataset is a (samples, labels, names) tuple
X_train,Y_train,Names_train=trainDataset[0],trainDataset[1],trainDataset[2]
X_test,Y_test,Names_test=testDataset[0],testDataset[1],testDataset[2]

# 1D CNN: five Conv-BN-ReLU blocks (with pooling), then a dense classifier head
model = Sequential()

model.add(Input(shape=X_train.shape[1:]))

model.add(Conv1D(16, 61, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(32, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dropout(0.5))

model.add(Dense(200))
model.add(Activation('relu'))

model.add(Dense(2))
model.add(Activation('softmax'))

opt = SGD(learning_rate=0.01)

model.compile(loss='binary_crossentropy',optimizer=opt,metrics=['accuracy'])

model.summary()

# Data is pre-shuffled, hence shuffle=False; the test set doubles as validation data
model.fit(X_train,Y_train,epochs=10,shuffle=False,validation_data=(X_test, Y_test))

# Re-evaluating on the training set itself also reports ~0.5 accuracy
model.evaluate(X_train,Y_train)

Here is model.fit():

model.fit(X_train,Y_train,epochs=10,shuffle=False,validation_data=(X_test, Y_test))

Epoch 1/10
914/914 [==============================] - 277s 300ms/step - loss: 0.6405 - accuracy: 0.6543 - val_loss: 7.9835 - val_accuracy: 0.5000
Epoch 2/10
914/914 [==============================] - 270s 295ms/step - loss: 0.3997 - accuracy: 0.8204 - val_loss: 19.8981 - val_accuracy: 0.5000
Epoch 3/10
914/914 [==============================] - 273s 298ms/step - loss: 0.2976 - accuracy: 0.8730 - val_loss: 1.9558 - val_accuracy: 0.5002
Epoch 4/10
914/914 [==============================] - 278s 304ms/step - loss: 0.2897 - accuracy: 0.8776 - val_loss: 20.2678 - val_accuracy: 0.5000
Epoch 5/10
914/914 [==============================] - 277s 303ms/step - loss: 0.2459 - accuracy: 0.8991 - val_loss: 5.4945 - val_accuracy: 0.5000
Epoch 6/10
914/914 [==============================] - 268s 294ms/step - loss: 0.2008 - accuracy: 0.9181 - val_loss: 32.4579 - val_accuracy: 0.5000
Epoch 7/10
914/914 [==============================] - 271s 297ms/step - loss: 0.1695 - accuracy: 0.9317 - val_loss: 14.9538 - val_accuracy: 0.5000
Epoch 8/10
914/914 [==============================] - 276s 302ms/step - loss: 0.1423 - accuracy: 0.9452 - val_loss: 1.4420 - val_accuracy: 0.4988
Epoch 9/10
914/914 [==============================] - 266s 291ms/step - loss: 0.1261 - accuracy: 0.9497 - val_loss: 4.3830 - val_accuracy: 0.5005
Epoch 10/10
914/914 [==============================] - 272s 297ms/step - loss: 0.1142 - accuracy: 0.9548 - val_loss: 1.6054 - val_accuracy: 0.5009

Here is model.evaluate():

model.evaluate(X_train,Y_train)

914/914 [==============================] - 35s 37ms/step - loss: 1.7588 - accuracy: 0.5009

Here is model.summary():

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d (Conv1D)              (None, 4096, 16)          992       
_________________________________________________________________
batch_normalization (BatchNo (None, 4096, 16)          64        
_________________________________________________________________
activation (Activation)      (None, 4096, 16)          0         
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 2048, 16)          0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 2048, 32)          1568      
_________________________________________________________________
batch_normalization_1 (Batch (None, 2048, 32)          128       
_________________________________________________________________
activation_1 (Activation)    (None, 2048, 32)          0         
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 1024, 32)          0         
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 1024, 64)          6208      
_________________________________________________________________
batch_normalization_2 (Batch (None, 1024, 64)          256       
_________________________________________________________________
activation_2 (Activation)    (None, 1024, 64)          0         
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 512, 64)           0         
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 512, 64)           12352     
_________________________________________________________________
batch_normalization_3 (Batch (None, 512, 64)           256       
_________________________________________________________________
activation_3 (Activation)    (None, 512, 64)           0         
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 256, 64)           0         
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 256, 64)           12352     
_________________________________________________________________
batch_normalization_4 (Batch (None, 256, 64)           256       
_________________________________________________________________
activation_4 (Activation)    (None, 256, 64)           0         
_________________________________________________________________
flatten (Flatten)            (None, 16384)             0         
_________________________________________________________________
dropout (Dropout)            (None, 16384)             0         
_________________________________________________________________
dense (Dense)                (None, 200)               3277000   
_________________________________________________________________
activation_5 (Activation)    (None, 200)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 402       
_________________________________________________________________
activation_6 (Activation)    (None, 2)                 0         
=================================================================
Total params: 3,311,834
Trainable params: 3,311,354
Non-trainable params: 480
_________________________________________________________________

...also tried with sigmoid, but the problem persists...

You don't want to "try out" activation functions or loss functions for a well-defined problem statement. It seems you are confusing single-label multi-class and multi-label multi-class architectures.

Your output is a 2-class multi-class output with softmax activation, which is great, but you use binary_crossentropy, which only really makes sense for binary (single sigmoid output) or multi-label problems, not for a softmax multi-class head.

You probably want to use categorical_crossentropy instead. Additionally, I would suggest focal loss if there were class imbalance, but you seem to have a 50/50 class ratio, so it is not required.
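
A minimal sketch of that change, assuming Y_train/Y_test are one-hot encoded with shape (num_samples, 2) to match the Dense(2) softmax head (the post does not show the label format):

from tensorflow.keras.optimizers import SGD

# Matching loss for a 2-unit softmax output with one-hot labels
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(learning_rate=0.01),
              metrics=['accuracy'])

# If the labels were integer class ids of shape (num_samples,) instead,
# 'sparse_categorical_crossentropy' would be the matching loss.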

Remember, which accuracy metric gets used is decided based on the loss! Check the different metric classes in the Keras documentation: when you use binary_crossentropy, the accuracy used is BinaryAccuracy, while with categorical_crossentropy it uses CategoricalAccuracy.

Check this chart for details on what to use for what kind of problem statement. [Chart from the original answer: activation/loss combinations per problem type; image not reproduced.]
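
To avoid this silent metric resolution entirely, you can pass an explicit metric object instead of the 'accuracy' string; this sketch goes beyond what the original answer shows:

import tensorflow as tf

# An explicit metric object pins the metric, so it can no longer resolve
# to BinaryAccuracy behind your back when the loss is binary_crossentropy
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              metrics=[tf.keras.metrics.CategoricalAccuracy()])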


Apart from this, your network has a bottleneck at Flatten() and Dense(): the number of trainable parameters there is very high relative to the other layers. I would suggest another CNN layer to bring the number of filters up to 128 and the sequence length down even further, and to reduce the number of neurons in that dense layer.

98.9% (3,277,000/3,311,354) of ALL your trainable parameters sit between the Flatten and Dense layers! Not a great architectural choice; see the sketch below.
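
One way to act on this suggestion (a sketch, not the answerer's exact architecture; the Dense width of 50 is an illustrative choice) is to add one more conv block reaching 128 filters and to swap Flatten for global average pooling, reusing the layers already imported in the question:

from tensorflow.keras.layers import GlobalAveragePooling1D

# ...conv blocks as before, up to the (None, 256, 64) feature map...
model.add(Conv1D(128, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))   # -> (None, 128, 128)

# Global pooling yields a 128-dim vector instead of a 16384-dim Flatten,
# shrinking the classifier head from ~3.3M parameters to a few thousand
model.add(GlobalAveragePooling1D())
model.add(Dropout(0.5))
model.add(Dense(50, activation='relu'))
model.add(Dense(2, activation='softmax'))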

Beyond the points above, the model's results depend entirely on your data itself. Without knowing the data, I wouldn't be able to help further.

The solution to my problem was implementing Batch Renormalization: BatchNormalization(renorm=True). Additionally, normalizing the inputs helped greatly improve the overall performance of the neural network.
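
A minimal sketch of both changes (standardizing with training-set statistics is an assumption about the input pipeline; only the first conv block is shown, but every BatchNormalization layer would get renorm=True):

# Standardize inputs using statistics computed on the training set only
mean = X_train.mean(axis=0)
std = X_train.std(axis=0) + 1e-8   # guard against zero variance
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std

# Batch Renormalization keeps train-time batch statistics close to the
# moving averages used at inference time, which stabilizes evaluate()
model.add(Conv1D(16, 61, strides=1, padding="same"))
model.add(BatchNormalization(renorm=True))
model.add(Activation('relu'))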
