
Keras Deep Learning ValueError: logits and labels must have the same shape ((None, 2) vs (None, 1))

I'm building a model to identify certain animal species in images; right now the label is just a binary (0/1) categorical variable.

But when I train my model I get this error:

raise ValueError("logits and labels must have the same shape (%s vs %s)" %

    ValueError: logits and labels must have the same shape ((None, 2) vs (None, 1))

Here is my code:

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense

especies = [0,1,1,0,0,0,0,1,1,1,1,0]
x_train, x_test, y_train, y_test = train_test_split(cv_img, especies)
x_train_numpy = np.array(x_train)
x_test_numpy = np.array(x_test)
y_train_numpy = np.array(y_train)
y_test_numpy = np.array(y_test)
x_train_numpy = x_train_numpy/255
x_test_numpy = x_test_numpy/255

model = Sequential() # creates the model that layers will be added to

# first convolution block
model.add(Conv2D(32,(3,3),padding='same',input_shape=(32,32,3),activation='relu')) # adds a single layer
model.add(Conv2D(32,(3,3),activation='relu')) # padding is not needed because zeros were already added in the previous layer / input_shape is only given at the start
model.add(MaxPool2D(pool_size=(2,2))) # reduces the amount of data
model.add(Dropout(0.25)) # drop 25% of the connections

# second convolution block
model.add(Conv2D(64,(3,3),padding='same',activation='relu')) # input_shape is no longer needed
model.add(Conv2D(64,(3,3),activation='relu')) # padding is not needed because zeros were already added in the previous layer / input_shape is only given at the start
model.add(MaxPool2D(pool_size=(2,2))) # reduces the amount of data
model.add(Dropout(0.25)) # drop 25% of the connections

# third convolution block
model.add(Conv2D(64,(3,3),padding='same',activation='relu')) # input_shape is only used when the previous layer is the raw data
model.add(Conv2D(64,(3,3),activation='relu')) # second layer
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))

# final layers
model.add(Flatten()) # just the transition from the convolutions to the dense layers
model.add(Dense(512,activation='relu')) # an ordinary dense layer
model.add(Dropout(0.5)) # drop 50% of the connections
model.add(Dense(2,activation='sigmoid'))

model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.fit(x_train_numpy,y_train_numpy,epochs=10,batch_size=32, validation_data=(x_test_numpy,y_test_numpy),shuffle=True)

You're using binary_crossentropy, so the output layer of your model should contain only 1 neuron. The decision rule is: if the output value is greater than 0.5, the predicted class is 1; otherwise it is 0. You can also tune that threshold.
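
A quick way to see the mismatch the error is describing (a minimal sketch using the names from the code above; the shapes in the comments are what you would expect with the current Dense(2) output layer):

# With Dense(2, activation='sigmoid') as the last layer, the model output
# ("logits") has shape (None, 2), while the 0/1 labels are passed to
# binary_crossentropy as a single column, i.e. shape (None, 1).
print(model.output_shape)     # (None, 2)  -> the logits side of the error
print(y_train_numpy.shape)    # (n_samples,) -> treated as (None, 1) labels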

To fix your problem, please change the following line

model.add(Dense(2,activation='sigmoid'))

to

model.add(Dense(1,activation='sigmoid'))
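
With a single sigmoid output you can then recover the 0/1 class labels by applying the threshold described above (a minimal sketch; 0.5 is the usual default and can be tuned):

probs = model.predict(x_test_numpy)            # shape (n_samples, 1), values in [0, 1]
predicted_classes = (probs > 0.5).astype(int)  # 1 if the probability exceeds 0.5, else 0

If you preferred to keep two output units instead, you would switch to a softmax activation with sparse_categorical_crossentropy so the integer labels match the (None, 2) output, but for a binary problem the single sigmoid unit is the simpler fix.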
