
Tensorflow: ValueError: Shapes (None, 1) and (None, 2) are incompatible

I am very new to neural networks and machine learning. I created training data for clouds vs. no clouds and reused the same model from one of my gesture-recognition projects. When I first ran this model, I hit a similar error message:

ValueError: Shapes (64, 10) and (64, 4) are incompatible

I had used the same model before on some gesture-recognition code and got a similar error back then. As I understood it, the training data had only 4 classes while my model was trying to fit 10, so I had to change the number of neurons in the final layer from 10 to 4.
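The new error reports a related shape mismatch: with a 2-unit softmax and `categorical_crossentropy`, Keras expects labels shaped `(None, 2)` (one-hot rows), while integer class labels arrive as `(None, 1)`. A minimal NumPy sketch of the two label formats (the label values here are made up for illustration):

```python
import numpy as np

# Hypothetical integer labels for a 2-class problem (0 = Clouds, 1 = Normal_Terrain)
y = np.array([0, 1, 1, 0]).reshape(-1, 1)   # shape (4, 1) -- what Keras sees as (None, 1)

# categorical_crossentropy expects one-hot rows whose width matches the softmax layer
def one_hot(labels, num_classes):
    out = np.zeros((labels.shape[0], num_classes))
    out[np.arange(labels.shape[0]), labels.ravel()] = 1.0
    return out

y_onehot = one_hot(y, 2)                    # shape (4, 2) -- matches Dense(2, softmax)
print(y.shape, y_onehot.shape)              # (4, 1) (4, 2)
```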

import numpy as np 
import matplotlib.pyplot as plt 
import os
import cv2
import random
from tqdm import tqdm
import tensorflow as tf 
from keras import layers
from keras import models
# Loading the training data
DATADIR = "D:/Python_Code/Machine_learning/Cloud_Trainin_Data"
CATEGORIES = ["Clouds", "Normal_Terrain"]
training_data = []
for category in CATEGORIES:  # iterate over the two classes
    path = os.path.join(DATADIR,category)  # path to the images for this class
    class_num = CATEGORIES.index(category)  # classification index: 0=Clouds, 1=Normal_Terrain
    for img in tqdm(os.listdir(path)):  # iterate over each image in this class
        img_array = cv2.imread(os.path.join(path,img) ,cv2.IMREAD_GRAYSCALE)  # convert to array
        height = 1000
        dim = None
        (h, w) = img_array.shape[:2]
        r = height / float(h)
        dim = (int(w * r), height)
        resized = cv2.resize(img_array, dim, interpolation = cv2.INTER_AREA)
        training_data.append([resized, class_num])  # add this to our training_data

print(len(training_data))
random.shuffle(training_data)
for sample in training_data[:10]:
    print(sample[1])

X = []
y = []

for features,label in training_data:
    X.append(features)
    y.append(label)

hh, ww = resized.shape  # shape of the last image processed; assumes all resized images share it
X = np.array(X).reshape(-1, hh, ww, 1)
y = np.array(y)
#Normalizing the data
X = X/255.0

#Building the Model
model=models.Sequential()
model.add(layers.Conv2D(32, (5, 5), strides=(2, 2), activation='relu', input_shape=X.shape[1:])) 
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu')) 
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(2, activation='softmax'))
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
EPOCHS = 1
history = model.fit(X, y, batch_size = 5, epochs=EPOCHS, validation_split=0.1)
eval_loss, eval_accuracy = model.evaluate(X, y)  # evaluate returns loss first, then metrics

print(eval_loss, eval_accuracy)
model.save('Clouds.model')
loss = history.history["loss"]
acc = history.history["accuracy"]
epoch = np.arange(EPOCHS)
plt.plot(epoch, loss)
# plt.plot(epoch, val_loss)
plt.plot(epoch, acc, color='red')
plt.xlabel('Epochs')
plt.ylabel('Value')
plt.title('Training Loss and Accuracy')
plt.legend(['loss', 'accuracy'])
plt.show()

But in this case, when I apply the same solution, the error message keeps appearing.

Any suggestions?

It looks like you are trying to implement a binary classification problem. As @Tfer2 suggested, change the loss function from `categorical_crossentropy` to `binary_crossentropy`.
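An alternative that keeps the 2-unit softmax and `categorical_crossentropy` is to one-hot encode the integer labels (e.g. with `tf.keras.utils.to_categorical(y, 2)`), or to switch the loss to `sparse_categorical_crossentropy`, which accepts integer labels directly. The two losses compute the same value when the one-hot encoding matches the integers; a NumPy sketch with toy probabilities (not taken from the model above):

```python
import numpy as np

# Toy softmax outputs for 3 samples and 2 classes (made-up numbers)
p = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.6, 0.4]])
y_int = np.array([0, 1, 1])        # integer labels, what sparse_categorical_crossentropy takes
y_oh = np.eye(2)[y_int]            # one-hot labels, what categorical_crossentropy takes

cce = -np.sum(y_oh * np.log(p), axis=1)           # categorical_crossentropy per sample
scce = -np.log(p[np.arange(len(y_int)), y_int])   # sparse variant: index the true class
print(np.allclose(cce, scce))      # True -- same loss, different label format
```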

Working example code

import tensorflow as tf
import numpy as np
from tensorflow.keras import datasets
import tensorflow.keras as keras

# Load CIFAR-10 (implied by the (32, 32, 3) input shape and 1563 steps/epoch at batch size 32)
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

#input_shape=(X_train.shape[0],X_train.shape[1],X_train.shape[2])
input_shape = (32, 32, 3)

model = tf.keras.Sequential()
#first layer
model.add(keras.layers.Conv2D(32,(3,3),activation='relu',input_shape=input_shape))
model.add(keras.layers.MaxPool2D((3,3),strides=(2,2),padding='same'))

#second layer
model.add(keras.layers.Conv2D(64,(3,3),activation='relu',input_shape=input_shape))
model.add(keras.layers.MaxPool2D((3,3),strides=(2,2),padding='same'))
#third layer
model.add(keras.layers.Conv2D(64,(2,2),activation='relu',input_shape=input_shape))
model.add(keras.layers.MaxPool2D((2,2),strides=(2,2),padding='same'))
#flatten
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128,activation='relu'))
model.add(keras.layers.Dropout(0.3))
#output
model.add(keras.layers.Dense(2,activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(train_images,train_labels,validation_data=(test_images,test_labels),batch_size=32,epochs=50)

Output

Epoch 1/50
1563/1563 [==============================] - 25s 10ms/step - loss: 1.8524 - accuracy: 0.3163 - val_loss: 1.5800 - val_accuracy: 0.4311
Epoch 2/50
1563/1563 [==============================] - 15s 9ms/step - loss: 1.5516 - accuracy: 0.4329 - val_loss: 1.4234 - val_accuracy: 0.4886
Epoch 3/50
1563/1563 [==============================] - 17s 11ms/step - loss: 1.4365 - accuracy: 0.4789 - val_loss: 1.3575 - val_accuracy: 0.5111
Epoch 4/50
1563/1563 [==============================] - 14s 9ms/step - loss: 1.3624 - accuracy: 0.5098 - val_loss: 1.2803 - val_accuracy: 0.5471
Epoch 5/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.3069 - accuracy: 0.5322 - val_loss: 1.2305 - val_accuracy: 0.5663
Epoch 6/50
1563/1563 [==============================] - 12s 8ms/step - loss: 1.2687 - accuracy: 0.5471 - val_loss: 1.1839 - val_accuracy: 0.5796
Epoch 7/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.2243 - accuracy: 0.5668 - val_loss: 1.1430 - val_accuracy: 0.5940
Epoch 8/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.1891 - accuracy: 0.5800 - val_loss: 1.1261 - val_accuracy: 0.6061
Epoch 9/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.1568 - accuracy: 0.5916 - val_loss: 1.0998 - val_accuracy: 0.6157
Epoch 10/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.1219 - accuracy: 0.6053 - val_loss: 1.0769 - val_accuracy: 0.6210
Epoch 11/50
1563/1563 [==============================] - 12s 8ms/step - loss: 1.0993 - accuracy: 0.6148 - val_loss: 1.0369 - val_accuracy: 0.6335
Epoch 12/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.0709 - accuracy: 0.6232 - val_loss: 1.0119 - val_accuracy: 0.6463
Epoch 13/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.0473 - accuracy: 0.6302 - val_loss: 0.9964 - val_accuracy: 0.6516
Epoch 14/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.0252 - accuracy: 0.6419 - val_loss: 0.9782 - val_accuracy: 0.6587
Epoch 15/50
1563/1563 [==============================] - 12s 8ms/step - loss: 1.0035 - accuracy: 0.6469 - val_loss: 0.9569 - val_accuracy: 0.6644
Epoch 16/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.9836 - accuracy: 0.6572 - val_loss: 0.9586 - val_accuracy: 0.6633
Epoch 17/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.9656 - accuracy: 0.6614 - val_loss: 0.9192 - val_accuracy: 0.6790
Epoch 18/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.9506 - accuracy: 0.6679 - val_loss: 0.9133 - val_accuracy: 0.6781
Epoch 19/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.9273 - accuracy: 0.6756 - val_loss: 0.9046 - val_accuracy: 0.6824
Epoch 20/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.9129 - accuracy: 0.6795 - val_loss: 0.8855 - val_accuracy: 0.6910
Epoch 21/50
1563/1563 [==============================] - 14s 9ms/step - loss: 0.8924 - accuracy: 0.6873 - val_loss: 0.8886 - val_accuracy: 0.6927
Epoch 22/50
1563/1563 [==============================] - 16s 10ms/step - loss: 0.8840 - accuracy: 0.6905 - val_loss: 0.8625 - val_accuracy: 0.7013
Epoch 23/50
1563/1563 [==============================] - 15s 9ms/step - loss: 0.8655 - accuracy: 0.6980 - val_loss: 0.8738 - val_accuracy: 0.6950
Epoch 24/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.8543 - accuracy: 0.7019 - val_loss: 0.8454 - val_accuracy: 0.7064
Epoch 25/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.8388 - accuracy: 0.7056 - val_loss: 0.8354 - val_accuracy: 0.7063
Epoch 26/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.8321 - accuracy: 0.7115 - val_loss: 0.8244 - val_accuracy: 0.7161
Epoch 27/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.8169 - accuracy: 0.7163 - val_loss: 0.8390 - val_accuracy: 0.7084
Epoch 28/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.8071 - accuracy: 0.7190 - val_loss: 0.8372 - val_accuracy: 0.7127
Epoch 29/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7949 - accuracy: 0.7219 - val_loss: 0.7990 - val_accuracy: 0.7217
Epoch 30/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7861 - accuracy: 0.7273 - val_loss: 0.7940 - val_accuracy: 0.7281
Epoch 31/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7750 - accuracy: 0.7299 - val_loss: 0.7933 - val_accuracy: 0.7262
Epoch 32/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7635 - accuracy: 0.7373 - val_loss: 0.7964 - val_accuracy: 0.7254
Epoch 33/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7537 - accuracy: 0.7361 - val_loss: 0.7891 - val_accuracy: 0.7259
Epoch 34/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7460 - accuracy: 0.7410 - val_loss: 0.7893 - val_accuracy: 0.7257
Epoch 35/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7366 - accuracy: 0.7448 - val_loss: 0.7713 - val_accuracy: 0.7332
Epoch 36/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7275 - accuracy: 0.7492 - val_loss: 0.8443 - val_accuracy: 0.7095
Epoch 37/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7257 - accuracy: 0.7478 - val_loss: 0.7583 - val_accuracy: 0.7365
Epoch 38/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7097 - accuracy: 0.7535 - val_loss: 0.7497 - val_accuracy: 0.7458
Epoch 39/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7091 - accuracy: 0.7554 - val_loss: 0.7588 - val_accuracy: 0.7370
Epoch 40/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6945 - accuracy: 0.7576 - val_loss: 0.7583 - val_accuracy: 0.7411
Epoch 41/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6888 - accuracy: 0.7592 - val_loss: 0.7481 - val_accuracy: 0.7408
Epoch 42/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6829 - accuracy: 0.7634 - val_loss: 0.7372 - val_accuracy: 0.7456
Epoch 43/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6742 - accuracy: 0.7665 - val_loss: 0.7324 - val_accuracy: 0.7475
Epoch 44/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6646 - accuracy: 0.7679 - val_loss: 0.7444 - val_accuracy: 0.7425
Epoch 45/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6613 - accuracy: 0.7686 - val_loss: 0.7294 - val_accuracy: 0.7506
Epoch 46/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6499 - accuracy: 0.7712 - val_loss: 0.7335 - val_accuracy: 0.7470
Epoch 47/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6446 - accuracy: 0.7759 - val_loss: 0.7223 - val_accuracy: 0.7544
Epoch 48/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6376 - accuracy: 0.7793 - val_loss: 0.7259 - val_accuracy: 0.7496
Epoch 49/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6341 - accuracy: 0.7803 - val_loss: 0.7705 - val_accuracy: 0.7355
Epoch 50/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6234 - accuracy: 0.7820 - val_loss: 0.7116 - val_accuracy: 0.7562
