
How to develop a convolutional neural network to differentiate images with similar features?

I am currently developing a convolutional neural network in Keras (TensorFlow backend) that is meant to differentiate between a "pass" indicator and a "fail" indicator. The difference between the two classes is a small colour change within a tube. However, when I train the network on the images (approximately 1500 pictures of each class), it always predicts "pass" regardless of the input. My guess is that this is due to the strong similarity between the two classes, but I am not sure why the network cannot pick up the colour change as a differentiating feature.

The code I am currently using to build the classifier is below, as a reference for where the model might be developing such a bias.

# Imports from Keras Library to build Network
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Activation
from keras.callbacks import ModelCheckpoint
from keras.layers import BatchNormalization
# Initialising the CNN as a sequential network
classifier = Sequential()

# Addition of the first convolutional layer
classifier.add(Conv2D(32, kernel_size=(3, 3), input_shape = (356, 356, 3)))
# Dropout is added after each pooling block below to prevent over-reliance on certain nodes

# Adding the second/third/fourth convolutional/pooling/dropout blocks
classifier.add(BatchNormalization())
classifier.add(Activation("relu"))
classifier.add(Conv2D(32, (3, 3)))
classifier.add(BatchNormalization())
classifier.add(Activation("relu"))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Dropout(0.25))

classifier.add(Conv2D(32, (3, 3)))
classifier.add(BatchNormalization())
classifier.add(Activation("relu"))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Dropout(0.25))
classifier.add(Conv2D(64, (3, 3)))
classifier.add(BatchNormalization())
classifier.add(Activation("relu"))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Dropout(0.25))

# Flattening Layer
classifier.add(Flatten())
# Full connection using dense layers
classifier.add(Dense(units = 128))
classifier.add(BatchNormalization())
classifier.add(Activation("relu"))  
classifier.add(Dense(units = 2, activation = 'softmax'))

# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
classifier.summary()

# Fitting the CNN to the images

from keras.preprocessing.image import ImageDataGenerator

# Training image generator (applies augmentation so images vary during training)
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.4,
                                   zoom_range = 0.4,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

# Creation of training set
training_set = train_datagen.flow_from_directory('dataset/TrainingSet',
                                                 target_size = (356, 356),
                                                 batch_size = 32,
                                                 class_mode = 'categorical',
                                                 shuffle = True)

# Creation of test set
test_set = test_datagen.flow_from_directory('dataset/TestSet',
                                            target_size = (356, 356),
                                            batch_size = 32,
                                            class_mode = 'categorical',
                                            shuffle = True)

caller = ModelCheckpoint('/Users/anishkhanna/Documents/Work/BI Test/BI Models/part3.weights.{epoch:02d}-{val_loss:.2f}.hdf5', monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)
# Training the model based on above set
# Can also be improved with more images
classifier.fit_generator(training_set,
                         steps_per_epoch = 200,
                         epochs = 200,
                         validation_data = test_set,
                         validation_steps = 15,
                         shuffle = True,
                         callbacks = [caller])

# Creates an HDF5 file to save the model so it can be used later without retraining
classifier.save('BI_Test_Classifier_model.h5')

# Deletes the existing model
del classifier  

Any improvements I could make to the model, or other suggestions, would be much appreciated.

If your distinguishing feature is mainly the colour, you can pre-process the images to help the neural network. In this case, you can convert RGB to Hue Saturation Value (HSV) and use, for example, just the Hue channel, which carries the colour information of each pixel and ignores shading etc. Here is a post on that, and you can plug it in as the preprocessing_function of ImageDataGenerator.
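As a minimal sketch (not tested on your data): ImageDataGenerator applies preprocessing_function before rescale, so the function receives RGB arrays in the 0-255 range; the Hue channel is repeated across three channels so the model's (356, 356, 3) input shape stays unchanged. The function name rgb_to_hue is just illustrative.

import numpy as np
from matplotlib.colors import rgb_to_hsv
from keras.preprocessing.image import ImageDataGenerator

def rgb_to_hue(image):
    """Convert an RGB image array to HSV and keep only the Hue channel,
    broadcast back to three channels so the input shape is unchanged."""
    hsv = rgb_to_hsv(image / 255.0)           # shape (H, W, 3), values in [0, 1]
    hue = hsv[:, :, 0:1]                       # keep the Hue channel only
    return np.repeat(hue, 3, axis=2) * 255.0   # restore 3 channels and the 0-255 range

# Same augmentation settings as before, with the Hue pre-processing added
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.4,
                                   zoom_range = 0.4,
                                   horizontal_flip = True,
                                   preprocessing_function = rgb_to_hue)

You would apply the same preprocessing_function to test_datagen as well, so training and validation images are transformed consistently.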
