ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 33714, 12), found shape=(None, 12)
I am trying to run a simple RNN on some data extracted from a CSV file. I have already preprocessed my data and split it into a training set and a validation set, but I get the error above. Below are my network structure and what I have tried so far. My shapes are (33714, 12) for x_train, (33714,) for y_train, (3745, 12) for x_val, and (3745,) for y_val.
model = Sequential()
# LSTM LAYER IS ADDED TO MODEL WITH 128 CELLS IN IT
model.add(LSTM(128, input_shape=x_train.shape, activation='tanh', return_sequences=True))
model.add(Dropout(0.2)) # 20% DROPOUT ADDED FOR REGULARIZATION
model.add(BatchNormalization())
model.add(LSTM(128, input_shape=x_train.shape, activation='tanh', return_sequences=True)) # ADD ANOTHER LAYER
model.add(Dropout(0.1))
model.add(BatchNormalization())
model.add(LSTM(128, input_shape=x_train.shape, activation='tanh', return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(32, activation='relu')) # ADD A DENSE LAYER
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax')) # FINAL CLASSIFICATION LAYER WITH 2 CLASSES AND SOFTMAX
# ---------------------------------------------------------------------------------------------------
# OPTIMIZER SETTINGS
opt = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE, decay=DECAY)
# MODEL COMPILE
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
# CALLBACKS
tensorboard = TensorBoard(log_dir=f"logs/{NAME}")
filepath = "RNN_Final-{epoch:02d}-{val_acc:.3f}"
checkpoint = ModelCheckpoint("models/{}.model".format(filepath), monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')  # save only the best ones
# RUN THE MODEL
history = model.fit(x_train, y_train, epochs=EPOCHS, batch_size=BATCH_SIZE,
validation_data=(x_val, y_val), callbacks=[tensorboard, checkpoint])
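Note: Keras prepends a batch dimension to `input_shape`, so `input_shape=x_train.shape` (i.e. `(33714, 12)`) tells the first LSTM to expect each individual sample to have shape `(33714, 12)`, which matches the `expected shape=(None, 33714, 12)` in the error. LSTM layers take 3-D input of `(batch, timesteps, features)`. A minimal NumPy sketch of one common workaround, treating each row as a length-1 sequence (an assumption about the data, not necessarily the intended framing):

```python
import numpy as np

x_train = np.zeros((33714, 12))  # placeholder standing in for the real data

# Add a timesteps axis so each sample is (timesteps, features) = (1, 12);
# the matching layer spec would then be input_shape=(1, 12).
x_train_3d = x_train.reshape(-1, 1, 12)
print(x_train_3d.shape)  # (33714, 1, 12)
```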
Though it will give you a large value, what may be best to do would be to flatten the dimension with the larger size.
A tensorflow.keras.layers.Flatten() layer basically makes your output shape the non-batch values multiplied together, i.e. input: (None, 5, 5) -> Flatten() -> (None, 25).
For your example, this will give you:
(None, 33714, 12) -> (None, 404568).
I'm not entirely sure if this will work when you change the shape sizes, but that is how I overcame my issue with incompatible shapes: expected: (None, x), got: (None, y, x).
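The shape arithmetic above can be checked with plain NumPy, since reshape performs the same collapse of the non-batch axes that Flatten does:

```python
import numpy as np

batch = np.zeros((2, 33714, 12))          # stands in for (None, 33714, 12), batch size 2
flat = batch.reshape(batch.shape[0], -1)  # collapse non-batch dims: 33714 * 12
print(flat.shape)  # (2, 404568)
print(33714 * 12)  # 404568
```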