
How to fix “val_accuracy: 0.0000e+00” in categorical classification?

I am new to deep learning. I have 3 classes to classify, and when I train my model I observe that "val_loss > val_accuracy", which I take to mean my model is overfitting. How can I fix this? I also get "val_accuracy: 0.0000e+00". Initially I have kept the number of epochs low, and I only have a small amount of data to train the model on.

import tensorflow as tf           
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import Dense
from keras.preprocessing.image import ImageDataGenerator
import numpy as np
import os
from pathlib import Path
country  = "armenia"
cwd = os.getcwd()
print("cwd",cwd)
save_path = r'E://paymentz//'+country+'/'
abc  = os.listdir(r'E:/paymentz/'+country+'/training')
print("list of subfolders in directory:",abc)

model = Sequential()
model.add(Convolution2D(16, 2, 2, input_shape = ( 64, 64, 3), activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2,2)))
model.add(Dropout(0.5))
model.add(Convolution2D(32, 3, 3, activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2,2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(output_dim= 64, activation='relu' ))
output_dim = os.listdir(r'E:/paymentz/'+country+'/training')
print(len(output_dim))
output_dim = len(output_dim)
model.add(Dense(output_dim , activation = 'softmax'))
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics =['accuracy'])
batch_size = 16
train_datagen = ImageDataGenerator(rescale = 1./255,
                               shear_range = 0.2,
                               zoom_range = 0.2,
                               horizontal_flip = True,
                               rotation_range = 360)
test_datagen = ImageDataGenerator(rescale = 1./255)
# use the augmenting train_datagen (not test_datagen) for the training data
training_set = train_datagen.flow_from_directory(r'E:/paymentz/'+country+'/training',
                                            target_size = (64, 64),
                                            batch_size= batch_size,
                                            class_mode='categorical')
test_set = test_datagen.flow_from_directory(r'E:/paymentz/'+country+'/testing',
                                        target_size= (64, 64),
                                        batch_size= batch_size,
                                        class_mode='categorical')
training_path = Path(fr'E://paymentz//{country}//training')
training_png_count = len(list(training_path.rglob('*.png')))
training_jpg_count = len(list(training_path.rglob('*.jpg')))
training_jpeg_count = len(list(training_path.rglob('*.jpeg')))
          
training_count = training_png_count + training_jpg_count + training_jpeg_count
print("training_count ", training_count)
            
testing_path = Path(fr'E://paymentz//{country}//testing')
testing_png_count = len(list(testing_path.rglob('*.png')))
testing_jpg_count = len(list(testing_path.rglob('*.jpg')))
testing_jpeg_count = len(list(testing_path.rglob('*.jpeg')))
            
testing_count = testing_png_count + testing_jpg_count + testing_jpeg_count
print("testing_count ", testing_count)              

steps_per_epoch = (training_count// batch_size )
print("steps_per_epoch", steps_per_epoch)
validation_steps = ( testing_count // batch_size )
print("validation_steps", validation_steps) 
    
model.fit_generator(
      training_set,
      validation_data = test_set,
      samples_per_epoch = training_count, 
      epochs = 15,
      validation_steps = validation_steps,
      steps_per_epoch = steps_per_epoch)
print("training done.")
score = model.fit(test_set)
score= model.evaluate_generator(test_set)
print("test_set ",score)
#score= model.evaluate_generator(training_set)
#print("training_set ", score)
#if score[0] < 0.05 and score [1] < .85:
save_path = r'E:/paymentz/'+country+'/'
model.save(save_path+country+'.model')
model.save(save_path+country+'.model.h5')
#model.save("StatewiseDLmodel.model.h5")
print("model saved")

abc  = os.listdir(r'E:/paymentz/'+country+'/training')

model_path = r''+country+'model.h5'

#model = tf.keras.models.load_model(country+'.model.h5')
print("model trained to:",score)```

Why are you fitting your model again on the test set?

print("training done.")
score = model.fit(test_set) # this should be predict.
score= model.evaluate_generator(test_set)
print("test_set ",score)

Remove the following line and try again.

score = model.fit(test_set)
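
For reference, a minimal sketch of how the end of the script could look once that line is gone, reusing the model and generators already defined in your question and assuming the same Keras version (one where fit_generator and evaluate_generator are still available): train once on the training generator, then only evaluate on the held-out test generator.

# Train once, validating on the held-out generator (settings taken from the question).
model.fit_generator(
    training_set,
    steps_per_epoch=steps_per_epoch,
    epochs=15,
    validation_data=test_set,
    validation_steps=validation_steps)
print("training done.")

# Only evaluate on the test set; never call fit() on it.
score = model.evaluate_generator(test_set, steps=validation_steps)
print("test_set loss and accuracy:", score)

The key point is that the test data is used exclusively for validation_data and evaluate_generator, so the reported score reflects data the model has never been trained on.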
