
Keras Concatenated Model Doesn't Learn

I'm trying to build a model that can predict emotions using 7 concatenated sub-models. Each of the 7 sub-models represents a part of the face: mouth, left eye, right eye, etc.

The problem is that the model doesn't learn at all: from the 2nd epoch to the last one (100), I get 15% accuracy, with no change in accuracy or loss across all the epochs.

I think the problem may be in my concatenated model or in my fit call (the training and label data).

There are 7 emotions: sad, angry, happy, etc.

Here are my model, my compile and training code, and my datasets.

Model

from keras.layers import Conv2D, MaxPooling2D, Input, concatenate
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Flatten



def build_all_faceparts_model(input_shape,batch_shape,num_classes):
  input1=Input(input_shape)
  input2=Input(input_shape)
  input3=Input(input_shape)
  input4=Input(input_shape)
  input5=Input(input_shape)
  input6=Input(input_shape)
  input7=Input(input_shape)

  # Create the model for right eye
  right_eye=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input1,  batch_input_shape = batch_shape) (input1)
  right_eye=MaxPooling2D(pool_size=(2, 2))(right_eye)
  right_eye=Dropout(0.25)(right_eye)
  right_eye=Flatten()(right_eye)


  # Create the model for left eye
  left_eye=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input2,  batch_input_shape = batch_shape) (input2)
  left_eye=MaxPooling2D(pool_size=(2, 2))(left_eye)
  left_eye=Dropout(0.25)(left_eye)
  left_eye=Flatten()(left_eye)

  # Create the model for right eyebrow
  right_eyebrow=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input3,  batch_input_shape = batch_shape) (input3)
  right_eyebrow=MaxPooling2D(pool_size=(2, 2))(right_eyebrow)
  right_eyebrow=Dropout(0.25)(right_eyebrow)
  right_eyebrow=Flatten()(right_eyebrow)


  # Create the model for left eyebrow
  left_eyebrow=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input4,  batch_input_shape = batch_shape) (input4)
  left_eyebrow=MaxPooling2D(pool_size=(2, 2))(left_eyebrow)
  left_eyebrow=Dropout(0.25)(left_eyebrow)
  left_eyebrow=Flatten()(left_eyebrow)



  # Create the model for mouth
  mouth=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input5,  batch_input_shape = batch_shape) (input5)
  mouth=MaxPooling2D(pool_size=(2, 2))(mouth)
  mouth=Dropout(0.25)(mouth)
  mouth=Flatten()(mouth)

  # Create the model for nose
  nose=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input6,  batch_input_shape = batch_shape) (input6)
  nose=MaxPooling2D(pool_size=(2, 2))(nose)
  nose=Dropout(0.25)(nose)
  nose=Flatten()(nose)



  # Create the model for jaw
  jaw=Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input7,  batch_input_shape = batch_shape) (input7)
  jaw=MaxPooling2D(pool_size=(2, 2))(jaw)
  jaw=Dropout(0.25)(jaw)
  jaw=Flatten()(jaw)



  concatenated = concatenate([right_eye, left_eye, right_eyebrow, left_eyebrow, mouth, nose, jaw],axis = -1)
  out = Dense(num_classes, activation='softmax')(concatenated)
  model = Model([input1,input2,input3,input4,input5,input6,input7], out)


  return model
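
Since the seven branches are structurally identical, the same model can be written with a small helper that builds one branch per input. This is only a sketch of an equivalent refactor, assuming the same hyperparameters as above; the input_shape / batch_input_shape keyword arguments are dropped because they are not needed when a layer is called on an Input tensor in the functional API:

from keras.layers import Conv2D, MaxPooling2D, Input, concatenate
from keras.layers.core import Dense, Dropout, Flatten
from keras.models import Model

def build_branch(inp):
    # One conv block per face part: conv -> pool -> dropout -> flatten
    x = Conv2D(32, kernel_size=(3, 3), activation='relu')(inp)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Dropout(0.25)(x)
    return Flatten()(x)

def build_all_faceparts_model_compact(input_shape, num_classes):
    # Seven inputs: right eye, left eye, right eyebrow, left eyebrow, mouth, nose, jaw
    inputs = [Input(input_shape) for _ in range(7)]
    branches = [build_branch(inp) for inp in inputs]
    concatenated = concatenate(branches, axis=-1)
    out = Dense(num_classes, activation='softmax')(concatenated)
    return Model(inputs, out)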

Train and test datasets. Here X_train_all is a list of datasets, unlike y_train_all:

X_train_all=[X_train_mouth,X_train_right_eyebrow,X_train_left_eyebrow,X_train_right_eye,X_train_left_eye,X_train_nose,X_train_jaw]


X_test_all=[X_test_mouth,X_test_right_eyebrow,X_test_left_eyebrow,X_test_right_eye,X_test_left_eye,X_test_nose,X_test_jaw]

y_train_all=y_train_mouth+y_train_right_eyebrow+y_train_left_eyebrow+y_train_right_eye+y_train_left_eye+y_train_nose+y_train_jaw

y_test_all=y_test_mouth+y_test_right_eyebrow+y_test_left_eyebrow+y_test_right_eye+y_test_left_eye+y_test_nose+y_test_jaw
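
A model built this way has seven Input layers and one softmax output, so fit expects a list of seven arrays (one per input, all with the same number of samples) plus a single label array. A quick sanity check along those lines, assuming the X_* and y_* variables above are NumPy arrays:

import numpy as np

# The model has 7 inputs, so the training data must be a list of 7 arrays
assert len(X_train_all) == 7

# Every input array must have the same number of samples,
# and the label array must match that sample count
n_samples = X_train_all[0].shape[0]
assert all(x.shape[0] == n_samples for x in X_train_all)
assert np.asarray(y_train_all).shape[0] == n_samples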

compile

from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
input_shape =X_train_mouth[0].shape
batch_shape = X_train_mouth[0].shape


model_all_faceparts=build_all_faceparts_model(input_shape,batch_shape,7)

#Compile Model
model_all_faceparts.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-3),metrics=["accuracy"])


lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=3)
early_stopper = EarlyStopping(monitor='val_acc', min_delta=0, patience=15, mode='auto')
checkpointer = ModelCheckpoint(current_dir+'/weights_jaffe.hd5', monitor='val_loss', verbose=1, save_best_only=True)

Train

history=model_all_faceparts.fit(
          X_train_all, y_train_all, batch_size=7, epochs=100, verbose=1,callbacks=[lr_reducer, checkpointer, early_stopper])

output

Epoch 1/100
181/181 [==============================] - 19s 107ms/step - loss: 94.6603 - acc: 0.1271
Epoch 2/100
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:1109: RuntimeWarning: Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,acc,lr
  (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:434: RuntimeWarning: Can save best model only with val_loss available, skipping.
  'skipping.' % (self.monitor), RuntimeWarning)
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:569: RuntimeWarning: Early stopping conditioned on metric `val_acc` which is not available. Available metrics are: loss,acc,lr
  (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
181/181 [==============================] - 15s 81ms/step - loss: 95.9962 - acc: 0.1492
Epoch 3/100
181/181 [==============================] - 15s 81ms/step - loss: 95.9962 - acc: 0.1492
Epoch 4/100
181/181 [==============================] - 15s 83ms/step - loss: 95.9962 - acc: 0.1492
Epoch 5/100
181/181 [==============================] - 15s 84ms/step - loss: 95.9962 - acc: 0.1492
Epoch 6/100
181/181 [==============================] - 15s 85ms/step - loss: 95.9962 - acc: 0.1492
Epoch 7/100
181/181 [==============================] - 16s 86ms/step - loss: 95.9962 - acc: 0.1492
Epoch 8/100
181/181 [==============================] - 16s 87ms/step - loss: 95.9962 - acc: 0.1492
Epoch 9/100
181/181 [==============================] - 16s 86ms/step - loss: 95.9962 - acc: 0.1492
Epoch 10/100
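
The warnings above come from the callbacks: ReduceLROnPlateau and ModelCheckpoint monitor val_loss and EarlyStopping monitors val_acc, but fit was called without validation data, so only loss, acc and lr exist and the callbacks never fire. A minimal sketch of one way to make those metrics available, assuming the X_test_all / y_test_all sets above are meant to serve as validation data:

history = model_all_faceparts.fit(
    X_train_all, y_train_all,
    batch_size=7, epochs=100, verbose=1,
    # supplying validation data makes val_loss / val_acc available to the callbacks
    validation_data=(X_test_all, y_test_all),
    callbacks=[lr_reducer, checkpointer, early_stopper])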

(I completely forgot about this post.) The problem was in the model itself: I just changed the model (added some layers) and everything worked fine, reaching 93% accuracy!
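
The answer doesn't show which layers were added. Purely as an illustration of that kind of change (deeper branches plus a hidden Dense layer before the softmax), one hypothetical variant could look like the sketch below; it is not the author's actual architecture:

from keras.layers import Conv2D, MaxPooling2D, Input, concatenate
from keras.layers.core import Dense, Dropout, Flatten
from keras.models import Model

def build_deeper_branch(inp):
    # Hypothetical deeper branch: two conv blocks instead of one
    x = Conv2D(32, (3, 3), padding='same', activation='relu')(inp)
    x = MaxPooling2D((2, 2))(x)
    x = Conv2D(64, (3, 3), padding='same', activation='relu')(x)
    x = MaxPooling2D((2, 2))(x)
    x = Dropout(0.25)(x)
    return Flatten()(x)

def build_deeper_model(input_shape, num_classes):
    # Same seven-input layout as the question, with a wider head before the softmax
    inputs = [Input(input_shape) for _ in range(7)]
    concatenated = concatenate([build_deeper_branch(inp) for inp in inputs], axis=-1)
    x = Dense(256, activation='relu')(concatenated)
    x = Dropout(0.5)(x)
    out = Dense(num_classes, activation='softmax')(x)
    return Model(inputs, out)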

PS: thanks to the TensorFlow support person who reminded me to post an answer.
