
Increase accuracy of CNN model (stuck at really low accuracy)

Here is my code for CNN training on image recognition:

python

    # definition of the model
    from keras.models import Sequential
    from keras.layers import (Conv2D, MaxPooling2D, Dropout, Flatten,
                              Dense, LeakyReLU, Activation)

    def make_model():
        model = Sequential()

        # first convolutional block
        model.add(Conv2D(16, (3, 3), input_shape=(32, 32, 3), padding="same",
                         kernel_initializer="glorot_uniform"))
        model.add(LeakyReLU(alpha=0.1))
        model.add(Conv2D(32, (3, 3), padding="same",
                         kernel_initializer="glorot_uniform"))
        model.add(LeakyReLU(alpha=0.1))
        model.add(MaxPooling2D(pool_size=(2, 2), padding="same"))
        model.add(Dropout(0.25))

        # second convolutional block
        model.add(Conv2D(32, (3, 3), padding="same"))
        model.add(LeakyReLU(alpha=0.1))
        model.add(Conv2D(64, (3, 3), padding="same"))
        model.add(LeakyReLU(alpha=0.1))
        model.add(MaxPooling2D(pool_size=(2, 2), padding="same"))
        model.add(Dropout(0.25))

        # fully connected layers
        model.add(Flatten())
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.1))  # activation
        model.add(Dropout(0.5))

        # output layer
        model.add(Dense(10))
        model.add(LeakyReLU(alpha=0.1))  # activation
        model.add(Activation("softmax"))

        return model

And then it gets stuck at a result that worries me:

loss: 7.4918; acc: 0.1226. 

I have been trying a few more approaches, but I don't know exactly what the right path forward is.

Without details of the problem, it is difficult to investigate further.

But I would encourage you to look more into:

  • BatchNormalization
  • loss function
  • learning rate
  • optimizer
  • hidden layers

The current state of the art is to apply convolution together with Batch Normalization and ReLU activation. The order should be the following:

  1. Convolution
  2. Batch Normalization
  3. ReLU (it could also be leaky ReLU or any other activation)

So you should add BN after your convolutions, and then you should also remove Dropout. Many researchers have found that Dropout is not needed if BN is used, and that BN actually performs better.
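A minimal sketch of this Conv → BatchNorm → ReLU ordering, assuming `tf.keras` (adapt the imports to your Keras setup); the `conv_bn_relu` helper and the filter counts are illustrative, not from the original code:

```python
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Conv2D, BatchNormalization, ReLU

def conv_bn_relu(model, filters):
    # use_bias=False: BatchNormalization's learned offset (beta) makes
    # the convolution's own bias term redundant.
    model.add(Conv2D(filters, (3, 3), padding="same", use_bias=False))
    model.add(BatchNormalization())
    model.add(ReLU())

model = Sequential([Input(shape=(32, 32, 3))])
conv_bn_relu(model, 16)
conv_bn_relu(model, 32)
print(model.output_shape)  # (None, 32, 32, 32)
```

Each block keeps the spatial size (because of `padding="same"`) and only changes the channel count, so pooling or strides can be added between blocks exactly as in the original model.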

Other than this, you should probably experiment with parameters such as the learning rate, the number of filters, and so on.

Also make sure that you are using a correct loss function and an output activation that corresponds to that loss.
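For example, a 10-class classifier with one-hot labels pairs a `softmax` output with `categorical_crossentropy` (use `sparse_categorical_crossentropy` instead if your labels are integer class ids). A hedged sketch, assuming `tf.keras`; note the `softmax` goes directly on the last `Dense`, with no extra activation before it:

```python
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Dense, Flatten

model = Sequential([
    Input(shape=(32, 32, 3)),
    Flatten(),
    Dense(10, activation="softmax"),  # softmax on the final Dense only
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # matches one-hot labels
              metrics=["accuracy"])
```

In the original model, the `LeakyReLU` inserted between the final `Dense(10)` and the `softmax` is exactly the kind of mismatch this check would catch.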
