How to apply normalization to images in the testing phase when using Keras ImageDataGenerator?

I'm trying to predict a new image using a trained model. My accuracy is 95%, but predict_classes always returns the first label [0] no matter what I input. I guess one reason is that I use featurewise_center=True and samplewise_center=True in ImageDataGenerator. I think I should apply the same normalization to my input image, but I can't find out what these options actually do to the image.

Any suggestions would be appreciated.

ImageDataGenerator code:

train_datagen = ImageDataGenerator(
    samplewise_center=True,
    rescale=1. / 255,
    shear_range=30,
    zoom_range=30,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2)

test_datagen = ImageDataGenerator(
    samplewise_center=True,
    rescale=1. / 255,
    shear_range=30,
    zoom_range=30,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)

Prediction code (I use 100x100x3 images to train the model):

import cv2
import numpy as np
from keras.models import load_model

model = load_model('CNN_model.h5')
img = cv2.imread('train/defect/6.png')
img = cv2.resize(img, (100, 100))
img = np.reshape(img, [1, 100, 100, 3])
img = img / 255.

classes = model.predict_classes(img)

print (classes)

Update (11/14):

I changed my prediction code as shown below, but the model still predicts the same class even when I feed it an image that was used to train the model (which reached 95% accuracy). Is there anything I missed?

model = load_model('CNN_model.h5')
img = cv2.imread('train/defect/6.png')
img = cv2.resize(img, (100, 100))
img = np.reshape(img, [1, 100, 100, 3])
img = np.array(img, dtype=np.float64)
img = train_datagen.standardize(img)  # the same generator instance used for training

classes = model.predict_classes(img)
print(classes)

You need to use the standardize() method of the ImageDataGenerator instance. From the Keras documentation:

standardize

 standardize(x) 

Applies the normalization configuration to a batch of inputs.

Arguments

  • x: Batch of inputs to be normalized.

Returns

The inputs, normalized.

So it would be like this:

img = cv2.imread('train/defect/6.png')
img = cv2.resize(img,(100,100))
img = np.reshape(img,[1,100,100,3])
img = train_datagen.standardize(img)

classes = model.predict_classes(img)

Note that it would apply the rescaling as well, so there is no need to do it yourself (i.e. remove img = img/255.).
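
One caveat, assuming the usual keras-preprocessing implementation: standardize() modifies the array you pass it in place (and returns it), so hand it a copy if you still need the raw pixel values:

img = train_datagen.standardize(np.copy(img))  # np.copy keeps the original array intact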

Further, keep in mind that since you have set featurewise_center=True, you need to call the fit() method of the generator before using it for training:

train_datagen.fit(training_data)

# then build a batch generator with flow() and train on it
# (training_labels is a placeholder for your label array)
model.fit_generator(train_datagen.flow(training_data, training_labels), ...)
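
For reference, this is roughly what the two centering options do to an image array, modeled on the keras-preprocessing standardize() logic (a sketch, not the exact library code):

import numpy as np

x = np.random.rand(100, 100, 3).astype(np.float64)  # one image

# samplewise_center=True: subtract this image's own mean
x_samplewise = x - np.mean(x, keepdims=True)

# featurewise_center=True: subtract the dataset-wide mean that
# datagen.fit(training_data) computed and stored on the generator:
# x_featurewise = x - datagen.mean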

Not a complete answer but some information:

From this link, which is referenced in the Keras docs:

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

I think you should set it up this way for training. Then, for testing, I think using train_datagen.standardize is the right approach.
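
Along those lines, a minimal test-time sketch (test_data_dir and the other names are placeholders, assuming the same directory layout as in training):

test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=False)  # keep file order so predictions map back to inputs

predictions = model.predict_generator(test_generator, steps=len(test_generator))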

I think the problem is that you used cv2 to load your images: cv2.imread returns channels in "b,g,r" order, not "r,g,b".

For example:

import cv2
import numpy as np
from tensorflow.keras.preprocessing import image

bgr = cv2.imread('r.jpg')
rgb = np.array(image.load_img('r.jpg'))
print(bgr[1,1,:], rgb[1,1,:])

result:

[ 83 113   0] [  0 114  83]
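
If that is the cause, a one-line fix is to convert the channel order right after loading:

img = cv2.imread('train/defect/6.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # reorder BGR -> RGB to match load_img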
