
Keras fit() on ImageDataGenerator() with featurewise_center gives poor validation accuracy

I have a question about using fit() on ImageDataGenerator. I can train MNIST successfully with Dense layers, in batches.
The following code works perfectly (validation accuracy 98.5%).

Load the data:

from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.preprocessing.image import ImageDataGenerator

(X_train, y_train), (X_test, y_test) = mnist.load_data()
# separate data into train and validation
from sklearn.model_selection import train_test_split
# Split the data
valid_per = 0.15
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=valid_per, shuffle= True)

N1 = X_train.shape[0] # training size
N2 = X_test.shape[0] # test size
N3 = X_valid.shape[0] # valid size
h = X_train.shape[1]
w = X_train.shape[2]


num_pixels = h*w
# reshape N1 samples to num_pixels
#x_train = X_train.reshape(N1, num_pixels).astype('float32') # shape is now (51000,784)
#x_test = X_test.reshape(N2, num_pixels).astype('float32') # shape is now (9000,784)


y_train = np_utils.to_categorical(y_train) # (51000,10): 51000 labels for 10 classes
y_valid = np_utils.to_categorical(y_valid) # (9000,10): 9000 labels for 10 classes
y_test = np_utils.to_categorical(y_test) # (10000,10): 10000 labels for 10 classes

num_classes = y_test.shape[1]

def baseline_model():
    # create model
    model = Sequential()
    # flatten the (h, w, 1) input to a w*h vector inside the model, since the
    # generator yields 4D batches that can't be reshaped beforehand
    model.add(Flatten(input_shape=(h, w, 1)))
    model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))
    # define output layer with softmax activation
    model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = baseline_model()
model.summary()

batch_size = 200
epochs = 20
steps_per_epoch_tr = int(N1/ batch_size) # 51000/200
steps_per_epoch_val = int(N3/batch_size) 

# reshape to be [samples][width][height][channel] for ImageDataGenerator -> datagen.flow
x_t = X_train.reshape(N1, w, h, 1).astype('float32')
x_v = X_valid.reshape(N3, w, h, 1).astype('float32')

# define data preparation
#datagen = ImageDataGenerator(rescale=1./255,featurewise_center= True,featurewise_std_normalization=True,width_shift_range=0.1,height_shift_range=0.1) # scales x_t
datagen = ImageDataGenerator(rescale=1./255,width_shift_range=0.1,height_shift_range=0.1) # scales x_t
#datagen.fit(x_t)
#datagen.fit(x_v)
train_gen = datagen.flow(x_t, y_train, batch_size=batch_size)
valid_gen = datagen.flow(x_v,y_valid, batch_size=batch_size)

model.fit_generator(train_gen,steps_per_epoch = steps_per_epoch_tr,validation_data = valid_gen,
 validation_steps = steps_per_epoch_val,epochs=epochs)
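Note: in newer versions of Keras (tf.keras in TensorFlow 2.x), fit_generator() is deprecated and model.fit() accepts generators directly. A minimal equivalent call, assuming the same variable names as above:

model.fit(train_gen,
          steps_per_epoch=steps_per_epoch_tr,
          validation_data=valid_gen,
          validation_steps=steps_per_epoch_val,
          epochs=epochs)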

Now, if I comment out the second datagen line and instead un-comment the first datagen line (the one with featurewise_center and featurewise_std_normalization) together with the two datagen.fit() calls, I get a validation accuracy of 1%. So this gives poor accuracy:

datagen = ImageDataGenerator(rescale=1./255,featurewise_center= True,featurewise_std_normalization=True,width_shift_range=0.1,height_shift_range=0.1) # scales x_t
##datagen = ImageDataGenerator(rescale=1./255,width_shift_range=0.1,height_shift_range=0.1) # scales x_t
datagen.fit(x_t)
datagen.fit(x_v)

If I un-comment the featurewise datagen line but keep the two datagen.fit() calls commented out, the accuracy is again 98.5%:

datagen = ImageDataGenerator(rescale=1./255,featurewise_center= True,featurewise_std_normalization=True,width_shift_range=0.1,height_shift_range=0.1) # scales x_t
##datagen = ImageDataGenerator(rescale=1./255,width_shift_range=0.1,height_shift_range=0.1) # scales x_t
#datagen.fit(x_t)
#datagen.fit(x_v)

But as per the Keras documentation, we need the datagen.fit() calls if we use featurewise_center.

So, I am confused about what is going wrong.

You've used both rescaling and featurewise normalization, which is the cause of the problem. Don't use rescaling when doing featurewise normalization: the mean/std computed by datagen.fit() and the pixel values rescaled by flow() end up on different scales, so all the input values to the network become negative. Remove rescale=1./255 from the ImageDataGenerator.

datagen = ImageDataGenerator(featurewise_center= True,featurewise_std_normalization=True,width_shift_range=0.1,height_shift_range=0.1) # scales x_t
datagen.fit(x_t)
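As a quick sanity check (my addition, assuming the train_gen variable from the question), you can pull one batch from the generator and inspect its value range; in the original rescale-plus-featurewise setup the values come out negative, as described above, while with the corrected generator the batch is roughly zero-centered:

X_batch, y_batch = next(train_gen)
print(X_batch.min(), X_batch.mean(), X_batch.max())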

Also, use separate ImageDataGenerator objects for training and validation, since data augmentation is usually applied only to the training data. And the mean/std should be calculated on the training data and then applied to the validation/test data.

Like this:

x_v = (x_v - datagen.mean)/(datagen.std + 1e-6)  # standardize validation data with the statistics fitted on the training data
datagen_valid = ImageDataGenerator(...)  # no augmentation for validation
valid_gen = datagen_valid.flow(x_v, y_valid, batch_size=batch_size)
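Putting it all together, here is a minimal sketch of the corrected pipeline, assuming the x_t, x_v, y_train, y_valid, batch_size and steps_per_epoch_* variables defined in the question; the validation generator does no augmentation and reuses the statistics computed on the training data:

datagen_train = ImageDataGenerator(featurewise_center=True,
                                   featurewise_std_normalization=True,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1)
datagen_train.fit(x_t)  # compute mean/std on the training data only

# standardize the validation images manually with the training statistics
x_v_std = (x_v - datagen_train.mean) / (datagen_train.std + 1e-6)

datagen_valid = ImageDataGenerator()  # no augmentation, no further normalization needed
train_gen = datagen_train.flow(x_t, y_train, batch_size=batch_size)
valid_gen = datagen_valid.flow(x_v_std, y_valid, batch_size=batch_size, shuffle=False)

model.fit_generator(train_gen, steps_per_epoch=steps_per_epoch_tr,
                    validation_data=valid_gen,
                    validation_steps=steps_per_epoch_val, epochs=epochs)

This way the validation batches are on the same scale as the training batches the network actually sees.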
