
KERAS low fit loss and high loss evaluation

I'm new to Keras. This code classifies brain MRI images as containing a tumor or not. When I run model.evaluate() to check the accuracy, I get a very high loss value, even though the loss is low while training the model (normally less than 1), and I get the following warning:

WARNING:tensorflow:6 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x00000221AC143AF0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.

Most of the code is copied from this link.

Here is the full code:

import numpy as np
import matplotlib.pyplot as plt
import os
import cv2

import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D

def load_data( DATADIR, IMG_SIZE, CATEGORIES ):
    data = []
    for category in CATEGORIES:  # iterate over the "no" / "yes" tumor categories

        path = os.path.join(DATADIR,category)  # create path to the category folder
        class_num = CATEGORIES.index(category)  # get the class index (0 = no tumor, 1 = tumor)

        for img in os.listdir(path):  # iterate over each image in this category
            try:
                img_array = cv2.imread(os.path.join(path,img) ,cv2.IMREAD_GRAYSCALE)  # read the image as a grayscale array
                
                img_array = cv2.medianBlur(img_array,5)
                
                img_array = cv2.adaptiveThreshold(img_array,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,11,2)
                
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))  # resize to normalize data size
                
                data.append([new_array, class_num])  # add the image and its label to the data list
            except Exception as e:  # in the interest of keeping the output clean...
                pass
            #except OSError as e:
            #    print("OSErrroBad img most likely", e, os.path.join(path,img))
            #except Exception as e:
            #    print("general exception", e, os.path.join(path,img))
    return data

TRAIN_DATADIR = r"F:\Train"
TEST_DATADIR = r"F:\Test"

CATEGORIES = ["no", "yes"]
IMG_SIZE = 128
training_data = load_data(TRAIN_DATADIR, IMG_SIZE, CATEGORIES)
testing_data = load_data(TEST_DATADIR, IMG_SIZE, CATEGORIES)

print(len(training_data))

import random
random.shuffle(training_data)
random.shuffle(testing_data)

X_train = []
y_train = []

for features,label in training_data:
    X_train.append(features)
    y_train.append(label)

#print(X[0].reshape(-1, IMG_SIZE, IMG_SIZE, 1))

X_train = np.asarray(X_train)
y_train = np.asarray(y_train)

X_train = np.array(X_train).reshape(-1, IMG_SIZE, IMG_SIZE, 1)


X_test = []
y_test = []

for features,label in testing_data:
    X_test.append(features)
    y_test.append(label)

    
X_test = np.asarray(X_test)
y_test = np.asarray(y_test)
#print(X[0].reshape(-1, IMG_SIZE, IMG_SIZE, 1))

X_test = np.array(X_test).reshape(-1, IMG_SIZE, IMG_SIZE, 1)

X_train = X_train/255.0


model = Sequential()

model.add(Conv2D(32, (3, 3), input_shape = X_train.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=10, epochs=15)

score = model.evaluate(X_test, y_test,verbose=1)

Ignore the warning.

Your low training loss and high evaluation loss mean that your model is overfitting. Stop training when your validation loss starts to increase (i.e. when validation accuracy stops improving), as sketched below.
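A minimal sketch of that approach using Keras' EarlyStopping callback, assuming you hold out part of the training data for validation (the validation_split fraction and patience value here are illustrative choices, not something from your code):

from tensorflow.keras.callbacks import EarlyStopping

# Stop when validation loss stops improving and roll back to the best weights seen.
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

model.fit(X_train, y_train,
          batch_size=10,
          epochs=15,
          validation_split=0.2,   # hold out 20% of the training data for validation
          callbacks=[early_stop])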
