ValueError: Input 0 of layer sequential_40 is incompatible with the layer
I'm modifying some old code by adding an attention layer to a model, but I can't figure out how to stack the layers with the correct input sizes.
The actual input data has shape (200, 189, 1).
I'm trying something like this:
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import (Activation, Conv2D, Dense, Dropout,
                                     Flatten, Input, LSTM, TimeDistributed)

def mocap_model(optimizer='SGD'):
    model = Sequential()
    model.add(Conv2D(32, 3, strides=(2, 2), padding='same', input_shape=(200, 189, 1)))
    model.add(Dropout(0.2))
    model.add(Activation('relu'))
    model.add(Conv2D(64, 3, strides=(2, 2), padding='same'))
    model.add(Dropout(0.2))
    model.add(Activation('relu'))
    model.add(Conv2D(64, 3, strides=(2, 2), padding='same'))
    model.add(Dropout(0.2))
    model.add(Activation('relu'))
    model.add(Conv2D(128, 3, strides=(2, 2), padding='same'))
    model.add(Dropout(0.2))
    model.add(Flatten())
    return model
cnn = mocap_model()
main_input = Input(shape=(200, 189, 1))

rnn = Sequential()
rnn = LSTM(256, return_sequences=True, input_shape=(200, 189))

model = TimeDistributed(cnn)(main_input)
model = rnn(model)
att_in = LSTM(256, return_sequences=True, dropout=0.3, recurrent_dropout=0.2)(model)
att_out = attention()(att_in)
output3 = Dense(256, activation='relu', trainable=True)(att_out)
output4 = Dense(4, activation='softmax', trainable=True)(output3)
model = Model(main_input, output4)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
But I get this error:
----> 8 model = TimeDistributed(cnn)(main_input)
ValueError: Input 0 of layer sequential_40 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: (None, 189, 1)
The problem is the input shape. tf.keras.layers.TimeDistributed expects a time axis in its input: an input tensor of shape (batch, time, ...). Your main_input describes a single image per sample, so the CNN wrapped in TimeDistributed ends up seeing 3-D slices where it expects 4-D ones. Add a time dimension to main_input:
main_input = Input(shape=(10, 200, 189, 1))
Working sample code:
import tensorflow as tf
cnn = tf.keras.Sequential()
cnn.add(tf.keras.layers.Conv2D(64, 1, 1, input_shape=(200, 189, 1)))
cnn.add(tf.keras.layers.Flatten())
cnn.output_shape
main_input = tf.keras.Input(shape=(10, 200, 189, 1))
outputs = tf.keras.layers.TimeDistributed(cnn)(main_input)
outputs.shape
Output
TensorShape([None, 10, 2419200])
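Applying the same fix to the model from the question gives a minimal end-to-end sketch like the one below. Note two assumptions: time=10 is an arbitrary example sequence length, and since the question's custom attention() layer isn't shown, it is omitted here; instead the second LSTM returns only its final state so the Dense head receives a 2-D tensor.

```python
import tensorflow as tf
from tensorflow.keras.layers import (Activation, Conv2D, Dense, Dropout,
                                     Flatten, Input, LSTM, TimeDistributed)
from tensorflow.keras.models import Model, Sequential

def mocap_model():
    # Same CNN as in the question; it consumes one (200, 189, 1) frame.
    model = Sequential()
    model.add(Conv2D(32, 3, strides=(2, 2), padding='same',
                     input_shape=(200, 189, 1)))
    model.add(Dropout(0.2))
    model.add(Activation('relu'))
    model.add(Conv2D(64, 3, strides=(2, 2), padding='same'))
    model.add(Dropout(0.2))
    model.add(Activation('relu'))
    model.add(Conv2D(64, 3, strides=(2, 2), padding='same'))
    model.add(Dropout(0.2))
    model.add(Activation('relu'))
    model.add(Conv2D(128, 3, strides=(2, 2), padding='same'))
    model.add(Dropout(0.2))
    model.add(Flatten())
    return model

cnn = mocap_model()

# 5-D input: (batch, time, height, width, channels). time=10 is an
# assumed sequence length; use however many frames your sequences have.
main_input = Input(shape=(10, 200, 189, 1))
x = TimeDistributed(cnn)(main_input)      # -> (None, 10, cnn_features)
x = LSTM(256, return_sequences=True)(x)   # LSTM consumes the time axis
# Final LSTM returns only its last state, so Dense sees a 2-D tensor.
x = LSTM(256, dropout=0.3, recurrent_dropout=0.2)(x)
x = Dense(256, activation='relu')(x)
output = Dense(4, activation='softmax')(x)

model = Model(main_input, output)
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
print(model.output_shape)  # (None, 4)
```

With this layout, training data must be batched as sequences of frames, shape (batch, 10, 200, 189, 1), rather than single images.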