
model.predict(), model.predict_classes() and model.predict_on_batch() seem to produce no result

I have created a model that uses a CNN to classify input data. The classification is multi-class, with 5 classes. During training the model seems fine, i.e. it neither overfits nor underfits. Yet after saving and loading the model, I always get the same output regardless of the input image. The final prediction array contains 0 for all the classes.

So I am not sure whether the model isn't predicting anything at all or is simply always producing the same result.

The model I created, after using TensorBoard to find the best-fitting configuration, is below.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import TensorBoard
import pickle
import time

X=pickle.load(open("X.pickle","rb"))
y=pickle.load(open("y.pickle","rb"))

X=X/255.0

dense_layers=[0]
layer_sizes=[64]
conv_layers=[3]


for dense_layer in dense_layers:
    for layer_size in layer_sizes:
        for conv_layer in conv_layers:
            NAME="{}-conv-{}-nodes-{}-dense-{}".format(conv_layer,layer_size,dense_layer,int(time.time()))
            print(NAME)

            tensorboard=TensorBoard(log_dir='logs/{}'.format(NAME))

            model = Sequential()

            model.add(Conv2D(layer_size, (3,3), input_shape=X.shape[1:]))
            model.add(Activation('relu'))
            model.add(MaxPooling2D(pool_size=(2,2)))

            for l in range(conv_layer-1):
                model.add(Conv2D(layer_size, (3,3)))
                model.add(Activation('relu'))
                model.add(MaxPooling2D(pool_size=(2,2)))

            model.add(Flatten())

            for l in range(dense_layer):
                model.add(Dense(layer_size))
                model.add(Activation('relu'))

            model.add(Dense(5))
            model.add(Activation('sigmoid'))

            model.compile(loss='sparse_categorical_crossentropy',
                         optimizer='adam',
                         metrics=['accuracy'])

            model.fit(X,y,batch_size=32,epochs=10,validation_split=0.3,callbacks=[tensorboard])

model.save('0x64x3-CNN-latest.model')

The model-loading snippet is below:

import cv2
import tensorflow as tf

CATEGORIES= ["fifty","hundred","ten","thousand","twenty"]

def prepare(filepath):
    IMG_SIZE=100
    img_array=cv2.imread(filepath)
    new_array=cv2.resize(img_array,(IMG_SIZE,IMG_SIZE))
    return new_array.reshape(-1,IMG_SIZE,IMG_SIZE,3)

model=tf.keras.models.load_model("0x64x3-CNN-latest.model")

prediction=model.predict([prepare('30.jpg')])

print(prediction)

The output is always [[0. 0. 0. 0. 0.]].

On converting to categories, it always results in "fifty".
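For context, converting the prediction array to a label is typically done by taking the index of the highest score, so an all-zero output always resolves to index 0, i.e. "fifty". A minimal sketch in plain Python (reusing the CATEGORIES list from the loading snippet):

```python
CATEGORIES = ["fifty", "hundred", "ten", "thousand", "twenty"]

prediction = [[0.0, 0.0, 0.0, 0.0, 0.0]]  # the all-zero output observed above

scores = prediction[0]
best = scores.index(max(scores))  # on a tie, index() returns the first match: 0
print(CATEGORIES[best])           # prints "fifty"
```

This is why the result looks like a "fifty" prediction even though the model is emitting no signal at all.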

My dataset contains almost 2200 images with an average of 350-500 images for each class.

Can someone help out with this?

I see that when you train, you normalize your images:

X = X/255.0

but when you test, i.e. at prediction time, you just read and resize your image without normalizing it. Try:

def prepare(filepath):
    IMG_SIZE = 100
    img_array = cv2.imread(filepath)
    img_array = img_array / 255.0  # normalize, matching the training data
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)  # add batch dimension

Also, your prepare function already returns the image with 4 dimensions (including the batch dimension), so when you call predict you do not need to wrap the input in a list. Instead of:

prediction=model.predict([prepare('30.jpg')])

you should do:

prediction=model.predict(prepare('30.jpg'))
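To see why the extra list is unnecessary: reshape(-1, IMG_SIZE, IMG_SIZE, 3) already adds the batch dimension, so predict receives a proper 4-D array. A quick check with a dummy NumPy array standing in for the cv2-loaded image:

```python
import numpy as np

IMG_SIZE = 100
img = np.zeros((IMG_SIZE, IMG_SIZE, 3), dtype=np.float32)  # stand-in for the resized image

batch = img.reshape(-1, IMG_SIZE, IMG_SIZE, 3)  # -1 becomes the batch axis
print(batch.shape)  # (1, 100, 100, 3)
```

Wrapping this in a further list would hand predict a 5-D input instead of a batch of one image.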

Hope it helps.
