Error while training a Keras deep learning model
So I designed a CNN and compiled it with the following parameters:
import csv
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop

training_file_loc = "8-SignLanguageMNIST/sign_mnist_train.csv"
testing_file_loc = "8-SignLanguageMNIST/sign_mnist_test.csv"

def getData(filename):
    images = []
    labels = []
    with open(filename) as csv_file:
        file = csv.reader(csv_file, delimiter=",")
        next(file, None)  # skip the header row
        for row in file:
            label = row[0]
            data = row[1:]
            img = np.array(data).reshape(28, 28)
            images.append(img)
            labels.append(label)
    images = np.array(images).astype("float64")
    labels = np.array(labels).astype("float64")
    return images, labels

training_images, training_labels = getData(training_file_loc)
testing_images, testing_labels = getData(testing_file_loc)

print(training_images.shape, training_labels.shape)
print(testing_images.shape, testing_labels.shape)

training_images = np.expand_dims(training_images, axis=3)
testing_images = np.expand_dims(testing_images, axis=3)

training_datagen = ImageDataGenerator(
    rescale=1/255,
    rotation_range=45,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest"
)

training_generator = training_datagen.flow(
    training_images,
    training_labels,
    batch_size=64,
)

validation_datagen = ImageDataGenerator(
    rescale=1/255,
    rotation_range=45,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest"
)

validation_generator = training_datagen.flow(
    testing_images,
    testing_labels,
    batch_size=64,
)

model = tf.keras.Sequential([
    keras.layers.Conv2D(16, (3, 3), input_shape=(28, 28, 1), activation="relu"),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(32, (3, 3), activation="relu"),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.25),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dropout(0.25),
    keras.layers.Dense(26, activation="softmax")
])

model.compile(
    loss="categorical_crossentropy",
    optimizer=RMSprop(lr=0.001),
    metrics=["accuracy"]
)
However, when I run model.fit(), I get the following error:

ValueError: Shapes (None, 1) and (None, 24) are incompatible

After changing the loss function to sparse_categorical_crossentropy, the program runs fine. I don't understand why this happens. Can anyone explain this, and the difference between these two loss functions?
The problem is that categorical_crossentropy expects one-hot encoded labels, meaning that for each sample it expects a tensor of length num_classes in which the label-th element is set to 1 and everything else is 0.
sparse_categorical_crossentropy, on the other hand, works on integer labels directly (the use case being a large number of classes, where one-hot encoded labels would waste a lot of memory on zeros). I believe, but cannot confirm, that categorical_crossentropy runs faster than its sparse counterpart.
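To make the difference concrete, here is a small numpy-only sketch (the probabilities and labels are hypothetical, not from the question's data) that computes both losses by hand; when the label formats match, the two give the same per-sample loss:

```python
import numpy as np

# Softmax outputs from some model: 3 samples, 4 classes (made-up values)
probs = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.2, 0.5, 0.1],
])

# Integer labels, as sparse_categorical_crossentropy expects
sparse_labels = np.array([0, 1, 2])

# One-hot labels, as categorical_crossentropy expects
onehot_labels = np.eye(4)[sparse_labels]

# categorical_crossentropy: -sum(y_true * log(y_pred)) per sample
cat_ce = -np.sum(onehot_labels * np.log(probs), axis=1)

# sparse_categorical_crossentropy: -log(p[label]) per sample
sparse_ce = -np.log(probs[np.arange(3), sparse_labels])

print(np.allclose(cat_ce, sparse_ce))  # True: same loss, different label format
```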
For your case, with 26 classes, I would recommend using the non-sparse version and converting your labels to one-hot encoding, like so:
def getData(filename):
    images = []
    labels = []
    with open(filename) as csv_file:
        file = csv.reader(csv_file, delimiter=",")
        next(file, None)
        for row in file:
            label = row[0]
            data = row[1:]
            img = np.array(data).reshape(28, 28)
            images.append(img)
            labels.append(label)
    images = np.array(images).astype("float64")
    labels = np.array(labels).astype("float64")
    return images, tf.keras.utils.to_categorical(labels, num_classes=26)  # you can omit num_classes to have it computed from the data
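For illustration, a numpy-only sketch (with hypothetical labels) of what this conversion produces: each integer label becomes a length-26 vector, matching the Dense(26, activation="softmax") output that categorical_crossentropy compares against:

```python
import numpy as np

# Hypothetical float labels, as they come out of getData
labels = np.array([0.0, 3.0, 25.0])

# numpy equivalent of tf.keras.utils.to_categorical(labels, num_classes=26)
onehot = np.eye(26)[labels.astype(int)]

print(onehot.shape)             # (3, 26): one length-26 vector per sample
print(int(onehot[1].argmax()))  # 3: argmax recovers the original class index
```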
Side note: unless you have a reason to use float64 for the images, I would switch to float32 (it halves the memory the dataset needs, and the model will likely cast them to float32 as its first operation anyway).
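A quick sketch of that memory difference (the array shape here is arbitrary, chosen only for illustration):

```python
import numpy as np

# A batch of 1000 dummy 28x28 images in each dtype
images64 = np.zeros((1000, 28, 28), dtype="float64")
images32 = images64.astype("float32")

print(images64.nbytes)  # 6272000 bytes (1000 * 28 * 28 * 8)
print(images32.nbytes)  # 3136000 bytes: exactly half
```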
Simply put: for classification problems where the output classes are integer labels, use sparse_categorical_crossentropy; for problems where the labels have been converted to one-hot encodings, use categorical_crossentropy.