
GlobalAvgPool1D incompatible with output size

My input shape is (150, 10, 1) and my output has the same shape, (150, 10, 1). The problem is multi-class classification (3 classes). After applying np_utils.to_categorical(Ytrain), the output shape becomes (150, 10, 3), which looks right. However, when I build the model with GlobalAvgPool1D(), it raises the error: "A target array with shape (150, 10, 3) was passed for an output of shape (None, 3) while using as loss categorical_crossentropy. This loss expects targets to have the same shape as the output." How should I fix this?

My code:

import tensorflow as tf
import numpy as np
from tensorflow.keras import utils as np_utils

input_size = (150, 10, 1)
Xtrain = np.random.randint(0, 100, size=(150, 10, 1))

Ytrain = np.random.choice([0,1, 2], size=(150, 10,1))
Ytrain = np_utils.to_categorical(Ytrain)

input_shape = (10, 1)
input_layer = tf.keras.layers.Input(input_shape)
conv_x = tf.keras.layers.Conv1D(filters=32, kernel_size=10, strides = 1, padding='same')(input_layer)

conv_x = tf.keras.layers.BatchNormalization()(conv_x)
conv_x = tf.keras.layers.Activation('relu')(conv_x)
g_pool = tf.keras.layers.GlobalAvgPool1D()(conv_x)
output_layer = tf.keras.layers.Dense(3, activation='softmax')(g_pool)
model = tf.keras.models.Model(inputs= input_layer, outputs = output_layer) 
model.summary()

model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])
hist = model.fit(Xtrain, Ytrain, batch_size= 5, epochs= 10, verbose= 0)

When I ran your code with Tensorflow version 2.2.0 in Google Colab, I got the following error - ValueError: Shapes (5, 10, 3) and (5, 3) are incompatible

You are getting this error because your labels Ytrain have shape (150, 10, 3) rather than (150, 3).

由於您的標簽的形狀為(None,3) ,因此您的輸入也應該是相同的。即(Number of records, 3) 修改后我能夠成功運行您的代碼,

Change

Ytrain = np.random.choice([0,1, 2], size=(150, 10,1))

to

Ytrain = np.random.choice([0,1, 2], size=(150, 1))

np_utils.to_categorical為標簽添加了 3 列,從而形成了我們的 model 期望的(150,3)形狀。

Fixed code -

import tensorflow as tf
print(tf.__version__)
import numpy as np
from tensorflow.keras import utils as np_utils

Xtrain = np.random.randint(0, 100, size=(150, 10, 1))

Ytrain = np.random.choice([0,1, 2], size=(150, 1))
Ytrain = np_utils.to_categorical(Ytrain)

print(Ytrain.shape)

input_shape = (10, 1)
input_layer = tf.keras.layers.Input(input_shape)
conv_x = tf.keras.layers.Conv1D(filters=32, kernel_size=10, strides = 1, padding='same')(input_layer)

conv_x = tf.keras.layers.BatchNormalization()(conv_x)
conv_x = tf.keras.layers.Activation('relu')(conv_x)
g_pool = tf.keras.layers.GlobalAvgPool1D()(conv_x)
output_layer = tf.keras.layers.Dense(3, activation='softmax')(g_pool)
model = tf.keras.models.Model(inputs= input_layer, outputs = output_layer) 
model.summary()

model.compile(loss='categorical_crossentropy', optimizer= tf.keras.optimizers.Adam(), 
          metrics=['accuracy'])
hist = model.fit(Xtrain, Ytrain, batch_size= 5, epochs= 10, verbose= 0)

print("Ran Successfully")

Output -

2.2.0
(150, 3)
Model: "model_13"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_21 (InputLayer)        [(None, 10, 1)]           0         
_________________________________________________________________
conv1d_9 (Conv1D)            (None, 10, 32)            352       
_________________________________________________________________
batch_normalization_15 (Batc (None, 10, 32)            128       
_________________________________________________________________
activation_9 (Activation)    (None, 10, 32)            0         
_________________________________________________________________
global_average_pooling1d_9 ( (None, 32)                0         
_________________________________________________________________
dense_14 (Dense)             (None, 3)                 99        
=================================================================
Total params: 579
Trainable params: 515
Non-trainable params: 64
_________________________________________________________________
Ran Successfully
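
As an optional sanity check, you can compare the model's output shape (via the standard Keras model.output_shape property) with the label shape before calling fit; apart from the batch dimension they should match:

print(model.output_shape)   # (None, 3)
print(Ytrain.shape)         # (150, 3)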

Hope this answers your question. Happy learning.
