
Keras Concatenated Model Doesn't learn

I'm trying to build a model that can predict emotions using 7 concatenated sub-models. Each of the 7 sub-models represents a part of the face: mouth, left_eye, right_eye, etc.

The problem is that the model doesn't learn at all: from the 2nd epoch to the last (100th) I get 15% accuracy, with no change in accuracy or loss across all the epochs.

I think the problem may be in my concatenated model or in my fit call (the training and label data).

There are 7 emotions: sad, angry, happy, etc.

Here are my model, the compile and fit calls, and my datasets.

Model

from keras.layers import Conv2D, MaxPooling2D, Input, concatenate
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Flatten



def build_all_faceparts_model(input_shape,batch_shape,num_classes):
  input1=Input(input_shape)
  input2=Input(input_shape)
  input3=Input(input_shape)
  input4=Input(input_shape)
  input5=Input(input_shape)
  input6=Input(input_shape)
  input7=Input(input_shape)

  # Create the model for right eye
  right_eye=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input1,  batch_input_shape = batch_shape) (input1)
  right_eye=MaxPooling2D(pool_size=(2, 2))(right_eye)
  right_eye=Dropout(0.25)(right_eye)
  right_eye=Flatten()(right_eye)


  # Create the model for left eye
  left_eye=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input2,  batch_input_shape = batch_shape) (input2)
  left_eye=MaxPooling2D(pool_size=(2, 2))(left_eye)
  left_eye=Dropout(0.25)(left_eye)
  left_eye=Flatten()(left_eye)

  # Create the model for right eyebrow
  right_eyebrow=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input3,  batch_input_shape = batch_shape) (input3)
  right_eyebrow=MaxPooling2D(pool_size=(2, 2))(right_eyebrow)
  right_eyebrow=Dropout(0.25)(right_eyebrow)
  right_eyebrow=Flatten()(right_eyebrow)


  # Create the model for left eyebrow
  left_eyebrow=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input4,  batch_input_shape = batch_shape) (input4)
  left_eyebrow=MaxPooling2D(pool_size=(2, 2))(left_eyebrow)
  left_eyebrow=Dropout(0.25)(left_eyebrow)
  left_eyebrow=Flatten()(left_eyebrow)



  # Create the model for mouth
  mouth=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input5,  batch_input_shape = batch_shape) (input5)
  mouth=MaxPooling2D(pool_size=(2, 2))(mouth)
  mouth=Dropout(0.25)(mouth)
  mouth=Flatten()(mouth)

  # Create the model for nose
  nose=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input6,  batch_input_shape = batch_shape) (input6)
  nose=MaxPooling2D(pool_size=(2, 2))(nose)
  nose=Dropout(0.25)(nose)
  nose=Flatten()(nose)



  # Create the model for jaw
  jaw=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input7,  batch_input_shape = batch_shape) (input7)
  jaw=MaxPooling2D(pool_size=(2, 2))(jaw)
  jaw=Dropout(0.25)(jaw)
  jaw=Flatten()(jaw)



  concatenated = concatenate([right_eye, left_eye, right_eyebrow, left_eyebrow, mouth, nose, jaw],axis = -1)
  out = Dense(num_classes, activation='softmax')(concatenated)
  model = Model([input1,input2,input3,input4,input5,input6,input7], out)


  return model

Train and test datasets

Here X_train_all is a list of datasets, unlike y_train_all.

X_train_all=[X_train_mouth,X_train_right_eyebrow,X_train_left_eyebrow,X_train_right_eye,X_train_left_eye,X_train_nose,X_train_jaw]


X_test_all=[X_test_mouth,X_test_right_eyebrow,X_test_left_eyebrow,X_test_right_eye,X_test_left_eye,X_test_nose,X_test_jaw]

y_train_all=y_train_mouth+y_train_right_eyebrow+y_train_left_eyebrow+y_train_right_eye+y_train_left_eye+y_train_nose+y_train_jaw

y_test_all=y_test_mouth+y_test_right_eyebrow+y_test_left_eyebrow+y_test_right_eye+y_test_left_eye+y_test_nose+y_test_jaw
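
One thing to double-check on the label side: a multi-input Keras model with a single softmax output still expects one label array with one row per sample, not the seven per-part label sets combined. Since all seven crops come from the same face image, a single one-hot array is enough. A minimal sketch, assuming y_train_mouth and y_test_mouth hold the integer emotion label of each sample (skip to_categorical if the labels are already one-hot):

from keras.utils import to_categorical

num_classes = 7

# All branches describe the same face, so one label per sample is enough.
# Assumption: the mouth label arrays contain integer class ids in 0..6.
y_train_all = to_categorical(y_train_mouth, num_classes)
y_test_all = to_categorical(y_test_mouth, num_classes)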

Compile

from keras.optimizers import Adam
input_shape =X_train_mouth[0].shape
batch_shape = X_train_mouth[0].shape


model_all_faceparts=build_all_faceparts_model(input_shape,batch_shape,7)

#Compile Model
model_all_faceparts.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-3),metrics=["accuracy"])
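
As an aside, categorical_crossentropy expects one-hot labels; if the labels were kept as integer class ids instead, the equivalent compile call would use the sparse variant. A sketch of that alternative (same optimizer and metrics, only the loss changes):

# Alternative, assuming integer labels 0..6 rather than one-hot vectors:
model_all_faceparts.compile(loss='sparse_categorical_crossentropy',
                            optimizer=Adam(lr=1e-3),
                            metrics=['accuracy'])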


from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint

lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=3)
early_stopper = EarlyStopping(monitor='val_acc', min_delta=0, patience=15, mode='auto')
checkpointer = ModelCheckpoint(current_dir+'/weights_jaffe.hd5', monitor='val_loss', verbose=1, save_best_only=True)

Train

history=model_all_faceparts.fit(
          X_train_all, y_train_all, batch_size=7, epochs=100, verbose=1,callbacks=[lr_reducer, checkpointer, early_stopper])
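
Note that the three callbacks above monitor val_loss and val_acc, but this fit call passes no validation data, so those metrics never exist and the callbacks only raise the warnings shown in the output below. A minimal sketch of the same call with validation data, assuming the X_test_all / y_test_all lists built earlier are meant to serve as the validation set:

# Same fit call, but with validation data so val_loss / val_acc are computed
# and ReduceLROnPlateau, ModelCheckpoint and EarlyStopping can actually fire.
history = model_all_faceparts.fit(
    X_train_all, y_train_all,
    batch_size=7, epochs=100, verbose=1,
    validation_data=(X_test_all, y_test_all),
    callbacks=[lr_reducer, checkpointer, early_stopper])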

Output

Epoch 1/100
181/181 [==============================] - 19s 107ms/step - loss: 94.6603 - acc: 0.1271
Epoch 2/100
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:1109: RuntimeWarning: Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,acc,lr
  (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:434: RuntimeWarning: Can save best model only with val_loss available, skipping.
  'skipping.' % (self.monitor), RuntimeWarning)
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:569: RuntimeWarning: Early stopping conditioned on metric `val_acc` which is not available. Available metrics are: loss,acc,lr
  (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
181/181 [==============================] - 15s 81ms/step - loss: 95.9962 - acc: 0.1492
Epoch 3/100
181/181 [==============================] - 15s 81ms/step - loss: 95.9962 - acc: 0.1492
Epoch 4/100
181/181 [==============================] - 15s 83ms/step - loss: 95.9962 - acc: 0.1492
Epoch 5/100
181/181 [==============================] - 15s 84ms/step - loss: 95.9962 - acc: 0.1492
Epoch 6/100
181/181 [==============================] - 15s 85ms/step - loss: 95.9962 - acc: 0.1492
Epoch 7/100
181/181 [==============================] - 16s 86ms/step - loss: 95.9962 - acc: 0.1492
Epoch 8/100
181/181 [==============================] - 16s 87ms/step - loss: 95.9962 - acc: 0.1492
Epoch 9/100
181/181 [==============================] - 16s 86ms/step - loss: 95.9962 - acc: 0.1492
Epoch 10/100

(I completely forgot about this post.) The problem was in the model itself: I just changed the model (added some layers) and everything was fine, reaching 93% accuracy!
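
The answer doesn't say exactly which layers were added. As one hypothetical illustration, each single-Conv2D branch could be deepened into two convolution blocks plus a small Dense layer before the concatenation; a sketch of one such branch (my variant, not necessarily the author's exact fix):

from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def build_branch(input_tensor):
    # One hypothetical deeper face-part branch: two conv layers, pooling,
    # dropout, and a Dense layer whose output feeds the concatenate call.
    x = Conv2D(32, (3, 3), activation='relu')(input_tensor)
    x = Conv2D(64, (3, 3), activation='relu')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    x = Dense(128, activation='relu')(x)
    return x

Applying one helper like this to each of the seven Input tensors (and concatenating the seven outputs as before) also keeps build_all_faceparts_model much shorter than seven hand-written copies of the same block.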

PS: thanks to the TensorFlow support guy who reminded me to post an answer.
