
GlobalAvgPool1D incompatible with output size

My input shape is (150, 10, 1) and my output has the same shape (150, 10, 1). My problem is multi-class classification (3 classes). After using np_utils.to_categorical(Ytrain) the output shape becomes (150, 10, 3), which is what I want. However, when training the model with GlobalAvgPool1D(), it gives the error: "A target array with shape (150, 10, 3) was passed for an output of shape (None, 3) while using as loss categorical_crossentropy. This loss expects targets to have the same shape as the output". How should I fix it?

My code:

import numpy as np
import tensorflow as tf
from tensorflow.keras import utils as np_utils

input_size = (150, 10, 1)
Xtrain = np.random.randint(0, 100, size=(150, 10, 1))

Ytrain = np.random.choice([0, 1, 2], size=(150, 10, 1))
Ytrain = np_utils.to_categorical(Ytrain)

input_shape = (10, 1)
input_layer = tf.keras.layers.Input(input_shape)
conv_x = tf.keras.layers.Conv1D(filters=32, kernel_size=10, strides=1, padding='same')(input_layer)

conv_x = tf.keras.layers.BatchNormalization()(conv_x)
conv_x = tf.keras.layers.Activation('relu')(conv_x)
g_pool = tf.keras.layers.GlobalAvgPool1D()(conv_x)
output_layer = tf.keras.layers.Dense(3, activation='softmax')(g_pool)
model = tf.keras.models.Model(inputs=input_layer, outputs=output_layer)
model.summary()

model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])
hist = model.fit(Xtrain, Ytrain, batch_size=5, epochs=10, verbose=0)

When I ran your code with TensorFlow version 2.2.0 in Google Colab, I got the following error: ValueError: Shapes (5, 10, 3) and (5, 3) are incompatible (the 5 is the batch_size you pass to fit).

You are getting this error because the labels Ytrain have the shape (150, 10, 3) instead of (150, 3).
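
You can see the mismatch directly by comparing the targets against the model's output shape (a quick check, assuming the model has already been built as in the question):

print(Ytrain.shape)         # (150, 10, 3) -- one one-hot vector per timestep
print(model.output_shape)   # (None, 3)    -- one prediction per whole sequence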

Since the model's output has shape (None, 3), your targets should have a matching shape, i.e. (number of records, 3). I was able to run your code successfully after modifying

Ytrain = np.random.choice([0, 1, 2], size=(150, 10, 1))

to

Ytrain = np.random.choice([0, 1, 2], size=(150, 1))

np_utils.to_categorical then expands the labels into 3 one-hot columns, producing the (150, 3) shape that the model expects.
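
As a quick standalone illustration of that step (a minimal sketch using the same np_utils import as the fixed code below):

import numpy as np
from tensorflow.keras import utils as np_utils

labels = np.random.choice([0, 1, 2], size=(150, 1))
one_hot = np_utils.to_categorical(labels)   # one-hot encode the 3 classes
print(one_hot.shape)                        # (150, 3)
print(one_hot[0])                           # e.g. [0. 1. 0.] for class 1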

Fixed code -

import tensorflow as tf
print(tf.__version__)
import numpy as np
from tensorflow.keras import utils as np_utils

Xtrain = np.random.randint(0, 100, size=(150, 10, 1))

Ytrain = np.random.choice([0, 1, 2], size=(150, 1))
Ytrain = np_utils.to_categorical(Ytrain)

print(Ytrain.shape)

input_shape = (10, 1)
input_layer = tf.keras.layers.Input(input_shape)
conv_x = tf.keras.layers.Conv1D(filters=32, kernel_size=10, strides=1, padding='same')(input_layer)

conv_x = tf.keras.layers.BatchNormalization()(conv_x)
conv_x = tf.keras.layers.Activation('relu')(conv_x)
g_pool = tf.keras.layers.GlobalAvgPool1D()(conv_x)
output_layer = tf.keras.layers.Dense(3, activation='softmax')(g_pool)
model = tf.keras.models.Model(inputs=input_layer, outputs=output_layer)
model.summary()

model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])
hist = model.fit(Xtrain, Ytrain, batch_size=5, epochs=10, verbose=0)

print("Ran Successfully")

Output -

2.2.0
(150, 3)
Model: "model_13"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_21 (InputLayer)        [(None, 10, 1)]           0         
_________________________________________________________________
conv1d_9 (Conv1D)            (None, 10, 32)            352       
_________________________________________________________________
batch_normalization_15 (Batc (None, 10, 32)            128       
_________________________________________________________________
activation_9 (Activation)    (None, 10, 32)            0         
_________________________________________________________________
global_average_pooling1d_9 ( (None, 32)                0         
_________________________________________________________________
dense_14 (Dense)             (None, 3)                 99        
=================================================================
Total params: 579
Trainable params: 515
Non-trainable params: 64
_________________________________________________________________
Ran Successfully
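
As an optional sanity check (not part of the original answer), you can ask the trained model for predictions; it returns one 3-way softmax vector per sample:

preds = model.predict(Xtrain)   # softmax probabilities, one row per sample
print(preds.shape)              # (150, 3)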

Hope this answers your question. Happy Learning.
