
ValueError: Error when checking target: expected activation_6 to have shape (70,) but got array with shape (71,)

I am creating a face recognition system using a CNN, following a tutorial. I am using Tensorflow==1.15.

The programme will take 70 snaps of the user's face and save them in the folder 'dataset'.

I keep getting the error:

ValueError: Error when checking target: expected activation_6 to have shape (70,) but got array with shape (71,)

Input shape - (32, 32, 1)

Classes (n_classes) - 70


# Imports assumed from the tutorial (not shown in the question); tf.keras equivalents also work
import numpy as np
from keras import backend as K
from keras import callbacks
from keras.models import Sequential
from keras.layers import (Conv2D, Activation, BatchNormalization, Dropout,
                          MaxPooling2D, Flatten, Dense)
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split

K.clear_session()
n_faces = len(set(ids))

model = model((32, 32, 1), n_faces)  # calling model() defined in the next code block
faces = np.asarray(faces)
faces = np.array([downsample_image(ab) for ab in faces])
ids = np.asarray(ids)
faces = faces[:,:,:,np.newaxis]
print("Shape of Data: " + str(faces.shape))
print("Number of unique faces : " + str(n_faces))


ids = to_categorical(ids)

faces = faces.astype('float32')
faces /= 255.


x_train, x_test, y_train, y_test = train_test_split(faces,ids, test_size = 0.2, random_state = 0)

print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)

checkpoint = callbacks.ModelCheckpoint('trained_model.h5', monitor='val_acc',
                                       save_best_only=True, save_weights_only=True, verbose=1)

model.fit(x_train, y_train,
          batch_size=32,
          epochs=10,
          validation_data=(x_test, y_test),
          shuffle=True,
          callbacks=[checkpoint])


def model(input_shape, num_classes):
    # Build and compile a small CNN classifier ending in a num_classes-way softmax
    model = Sequential()

    model.add(Conv2D(32, (3, 3), input_shape=input_shape))
    model.add(Activation("relu"))

    model.add(Conv2D(64, (3, 3)))
    model.add(BatchNormalization())
    model.add(Activation("relu"))

    model.add(Conv2D(64, (1, 1)))
    model.add(Dropout(0.5))
    model.add(BatchNormalization())
    model.add(Activation("relu"))

    model.add(Conv2D(128, (3, 3)))
    model.add(Dropout(0.5))
    model.add(Activation("relu"))

    model.add(MaxPooling2D(pool_size=(2,2)))

    model.add(Conv2D(64, (1, 1)))
    model.add(Activation("relu"))

    model.add(Flatten())
    model.add(Dense(32))
    model.add(Dense(num_classes))
    model.add(Activation("softmax"))
    
    model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

    model.summary()
    return model

Output

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 30, 30, 32)        320       
_________________________________________________________________
activation_1 (Activation)    (None, 30, 30, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 28, 28, 64)        18496     
_________________________________________________________________
batch_normalization_1 (Batch (None, 28, 28, 64)        256       
_________________________________________________________________
activation_2 (Activation)    (None, 28, 28, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 28, 28, 64)        4160      
_________________________________________________________________
dropout_1 (Dropout)          (None, 28, 28, 64)        0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 28, 28, 64)        256       
_________________________________________________________________
activation_3 (Activation)    (None, 28, 28, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 26, 26, 128)       73856     
_________________________________________________________________
dropout_2 (Dropout)          (None, 26, 26, 128)       0         
_________________________________________________________________
activation_4 (Activation)    (None, 26, 26, 128)       0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 128)       0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 13, 13, 64)        8256      
_________________________________________________________________
activation_5 (Activation)    (None, 13, 13, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 10816)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 32)                346144    
_________________________________________________________________
dense_2 (Dense)              (None, 70)                2310      
_________________________________________________________________
activation_6 (Activation)    (None, 70)                0         
=================================================================
Total params: 454,054
Trainable params: 453,798
Non-trainable params: 256
_________________________________________________________________
Shape of Data: (70, 32, 32, 1)
Number of unique faces : 70

I am calculating x_train, x_test, y_train and y_test as shown below:

x_train, x_test, y_train, y_test = train_test_split(faces,ids, test_size = 0.2, random_state = 0)

Output

x_train - (56, 32, 32, 1)

y_train - (56, 71)

x_test - (14, 32, 32, 1)

y_test - (14, 71)

What am I doing wrong with the dimensions of the CNN layers? Please help.

In your model.summary() output, you can see that your final dense layer has shape (None, 70). None stands for the batch size, which is not known yet; 70 is then the dimensionality of the output for each of your images.
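
As a quick check (assuming the model and y_train variables from the question are in scope), comparing the two shapes side by side makes the mismatch explicit:

print(model.output_shape)   # (None, 70) -> the model outputs 70 values per image
print(y_train.shape)        # (56, 71)   -> but each label vector has 71 entries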

From your y_train and y_test, it seems like you want 71 output classes, not 70, and thus the dimensions do not match. You can try to change your last dense layer to

model.add(Dense(num_classes+1))

This should work. I do not know why your y values do not have the same length as your number of classes. One reason could be that there is an extra class for "nothing", i.e. the class that is selected when no other class fits. That would explain why you need a 71-dimensional output even though you have 70 classes.
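
As for why the labels end up with 71 columns in the first place, one plausible mechanical cause (an assumption, since the label-collection code is not shown) is that the face ids are numbered from 1 rather than 0. to_categorical then allocates max(id) + 1 = 71 columns even though there are only 70 distinct ids:

from keras.utils import to_categorical
import numpy as np

ids = np.arange(1, 71)                # 70 unique ids, but numbered 1..70 (assumed)
one_hot = to_categorical(ids)         # columns 0..70 are allocated
print(len(set(ids)), one_hot.shape)   # 70 (70, 71)

If that is what is happening, re-indexing the ids to start at 0 would let the original Dense(num_classes) layer fit without the +1.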

I suspect that ids has a (row, col) shape of (70, 71), where 70 is the number of instances and 71 is the length of the softmax (one-hot) vector for each class. (I got the 70 by adding x_train.shape[0] = 56 and x_test.shape[0] = 14.)

In the line n_faces = len(set(ids)), the set call keeps only the unique entries (the softmax vector of each class), and len then gives you their count, which is 70.

In train_test_split, the y argument is the entire ids array, so it splits along the rows (70 instances) while retaining the softmax vector of each instance (a 71-dimensional vector).

This could explain why your model has a 70-dimensional output while you actually need a 71-dimensional output.
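
A small standalone sketch of that shape bookkeeping, using dummy data with the shapes from the question:

from sklearn.model_selection import train_test_split
import numpy as np

y = np.zeros((70, 71))   # 70 instances, each a 71-dimensional one-hot vector
y_train, y_test = train_test_split(y, test_size=0.2, random_state=0)
print(y_train.shape, y_test.shape)   # (56, 71) (14, 71) -- rows are split, columns are kept

# The model, however, ends in Dense(70) + softmax, so Keras raises the
# "expected activation_6 to have shape (70,)" error when it sees 71-column targets.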
