ValueError: Error when checking target: expected activation_1 to have shape (158,) but got array with shape (121,)

I get the following error while training a CNN:

Traceback (most recent call last):
  File "train_and_test.py", line 66, in <module>
    H = model.fit(trainX, trainY, validation_data=(testX, testY), batch_size=32, epochs=100, verbose=1)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 972, in fit
    batch_size=batch_size)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 789, in _standardize_user_data
    exception_prefix='target')
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py", line 138, in standardize_input_data
    str(data_shape))
ValueError: Error when checking target: expected activation_1 to have shape (158,) but got array with shape (121,)

activation_1 is the last layer of my network, and it should receive an array of size 158 as its target, because my problem has 158 classes. I build the model like this:

model = DeepIrisNet_A.build(width=128, height=128, depth=1, classes=158)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])

Now here is the strange thing: if I pass a number X other than 158 as the classes argument, the error says:

ValueError: Error when checking target: expected activation_1 to have shape (X,) but got array with shape (158,)

So the input array does have the right dimension! Yet whenever I use the correct value, the input array is never reported as having shape (158,).
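For reference, here is a quick check of the shapes (a minimal sketch, meant to be appended to the end of the script in the EDIT below, which defines trainX and trainY):

# Sanity check appended to the end of the training script shown in the EDIT below;
# it prints the shapes that model.fit() actually receives for the data and the target.
print("trainX shape: {}".format(trainX.shape))   # e.g. (n_samples, 128, 128, 1)
print("trainY shape: {}".format(trainY.shape))   # the second value should be 158, but comes out as 121 here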

Where am I going wrong? Any suggestions?

EDIT - here is some of my code:

This is the script for training and testing the CNN:

from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from datasets import UtirisLoader
from models import DeepIrisNet_A
from utilities import ResizerPreprocessor
from utilities import ConvertColorSpacePreprocessor
from keras.optimizers import SGD
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import tensorflow as tf

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True, help="path to input dataset")
ap.add_argument("-o", "--output", required=True, help="path to the output loss/accuracy plot")
args = vars(ap.parse_args())

# grab the list of images that we’ll be describing
print("[INFO] loading images...")
imagePaths = list(paths.list_images(args["dataset"]))

# initialize the image preprocessor
rp = ResizerPreprocessor(128, 128)
ccsp = ConvertColorSpacePreprocessor()

# load the dataset from disk then scale the raw pixel intensities to the range [0, 1]
utiris = UtirisLoader(preprocessors=[rp, ccsp])
(data, labels) = utiris.load_infrared(imagePaths, verbose=100)


# print some infos
print("DATA LENGTH: {}".format(len(data)))
print("LABELS LENGTH: {}".format(len(labels)))

unique = np.unique(labels, return_counts=False)
print("LABELS COUNT: {}".format(len(unique)))


# convert data to float
data = data.astype("float") / 255.0

# partition the data into training and testing splits using 75% of the data for training
# and the remaining 25% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.25, random_state=42)
#trainX = np.resize(trainX, (-1, 128, 128, 1))
trainX = trainX.reshape((trainX.shape[0], 128, 128, 1))
testX = testX.reshape((testX.shape[0], 128, 128, 1))

# convert the labels from integers to vectors
trainY = LabelBinarizer().fit_transform(trainY)
testY = LabelBinarizer().fit_transform(testY)

print("trainY: {}".format(trainY))

# initialize the optimizer and model
print("[INFO] compiling model...")
opt = SGD(lr=0.01, momentum=0.9)
model = DeepIrisNet_A.build(width=128, height=128, depth=1, classes=158)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])

#train the network
print("[INFO] training network...")
H = model.fit(trainX, trainY, validation_data=(testX, testY), batch_size=32, epochs=100, verbose=1)

# evaluate the network
print("[INFO] evaluating network...")
predictions = model.predict(testX, batch_size=32)
print(classification_report(testY.argmax(axis=1), predictions.argmax(axis=1), target_names=["cat", "dog", "panda"]))
# plot the training loss and accuracy
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, 100), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, 100), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, 100), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, 100), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend()
plt.savefig(args["output"])

This is the structure of the CNN:

from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dropout
from keras.layers.core import Dense
from keras import backend as K

class DeepIrisNet_A:
    @staticmethod
    def build(width, height, depth, classes):
        # initialize the models along with the input shape to be "channels last" and the channels dimension itself
        model = Sequential()
        inputShape = (height, width, depth)
        chanDim = -1 # the index of the channel dimension, needed for batch normalization. -1 indicates that channels is the last dimension in the input shape

        # if we are using "channel first", update the input shape
        if K.image_data_format() == "channels_first":
            inputShape = (depth, height, width)
            chanDim = 1
        # CONV 1
        model.add(Conv2D(32,(5,5), strides=(1,1), padding="same", input_shape=inputShape))
        # BN 1
        model.add(BatchNormalization(axis=chanDim))
        # CONV 2
        model.add(Conv2D(64, (3,3), strides=(1,1), padding ="valid"))
        # POOL 1
        model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
        # BN 2
        model.add(BatchNormalization(axis=chanDim))
        # CONV 3
        model.add(Conv2D(128, (3,3), strides=(1,1), padding ="valid"))
        # BN 3
        model.add(BatchNormalization(axis=chanDim))
        # CONV 4
        model.add(Conv2D(192, (3,3), strides=(1,1), padding ="same"))
        # POOL 2
        model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
        # BN 4
        model.add(BatchNormalization(axis=chanDim))
        # CONV 5
        model.add(Conv2D(256, (3,3), strides=(1,1), padding ="valid"))
        # BN 5
        model.add(BatchNormalization(axis=chanDim))
        # CONV 6
        model.add(Conv2D(320, (3,3), strides=(1,1), padding ="valid"))
        # POOL 3
        model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
        # BN 6
        model.add(BatchNormalization(axis=chanDim))
        # CONV 7
        model.add(Conv2D(480, (3,3), strides=(1,1), padding ="valid"))
        # BN 7
        model.add(BatchNormalization(axis=chanDim))
        # CONV 8
        model.add(Conv2D(512, (3,3), strides=(1,1), padding ="valid"))
        # POOL 4
        model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
        # BN 8
        model.add(BatchNormalization(axis=chanDim))
        # FC 9
        model.add(Flatten())
        model.add(Dense(4096))
        # DROP 10
        model.add(Dropout(0.5))
        # FC 11
        model.add(Dense(4096))
        # DROP 12
        model.add(Dropout(0.5))
        # FC 13
        model.add(Dense(classes))
        # COST 14
        model.add(Activation("softmax"))

        # return the constructed network architecture
        return model

I haven't tried running the code, but I think I may have found your problem.

Note that LabelBinarizer only gives you as many columns as there are distinct classes. For example:

from sklearn import preprocessing

y = [1, 2, 6, 4, 2]
lb = preprocessing.LabelBinarizer()
lb.fit(y)

lb.transform(y)

will give you:

>>> array([[1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 1, 0],
       [0, 1, 0, 0]])

since there are only 4 unique classes.
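The number of columns tracks lb.classes_, so printing it is a quick way to see exactly which classes the binarizer picked up (a small addition to the snippet above):

print(lb.classes_)        # array([1, 2, 4, 6]) -> one column per class actually seen
print(len(lb.classes_))   # 4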

You probably do have 158 different classes, but most likely some of them have no samples at all, so you end up with only 121 columns in trainY.
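One way to work around this (a minimal sketch, assuming your labels are integer class IDs in the range 0..157; adapt the fitted values if they are strings) is to fit a single LabelBinarizer on the full set of possible classes and reuse it for both splits:

import numpy as np
from sklearn.preprocessing import LabelBinarizer

# Fit on every possible class ID, not just the ones that happen to be present,
# so the one-hot vectors always have 158 columns in a consistent order.
lb = LabelBinarizer()
lb.fit(np.arange(158))

trainY = lb.transform(trainY)   # (n_train_samples, 158)
testY = lb.transform(testY)     # (n_test_samples, 158)

Alternatively, keep the binarizer as it is and size the last layer from the data, e.g. classes=trainY.shape[1], so the softmax matches the number of classes that actually occur. Either way, fitting one binarizer and reusing it for both splits also avoids a second pitfall in the posted script: two independently fitted binarizers can map the same column index to different classes when the training and test splits do not contain exactly the same set of labels.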
