
Dimensional error for text classification using Conv2D layer in Keras

I have a dataframe which I split into train and test sets; the input shape of the train set is (4115, 588). Now I want to create a neural network with Conv2D layers, but I face this error when I pass in the input shape argument:

ValueError: Input 0 of layer sequential_8 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: (None, 588, 1)

I tried the following steps:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

X_train = X_train.to_numpy()
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1],1))

model = Sequential()
model.add(Conv2D(128, kernel_size=(3,3), input_shape=(X_train.shape[0], 588, 1),
                 activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(32, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))

Can someone guide me on how to solve this error? I am relatively new to this topic.

Conv2D expects a 4+D input tensor: batch_shape + (channels, rows, cols) with data_format='channels_first', or batch_shape + (rows, cols, channels) with data_format='channels_last' (the default).
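For your data that means the (4115, 588) matrix has to be reshaped into a 4D tensor before it reaches the first Conv2D layer, and input_shape must describe a single sample, so it must not contain the number of samples (4115). Below is a minimal sketch; the 28×21 grid is purely an assumption on my part (28 × 21 = 588, any factorization works), and it imposes a spatial layout your features may not really have:

import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Stand-in for X_train.to_numpy() from the question: 4115 samples, 588 features.
X_train = np.random.rand(4115, 588).astype('float32')

# Fold the 588 features into a 28x21 single-channel "image":
# (samples, 588) -> (samples, rows, cols, channels) = (samples, 28, 21, 1)
X_train = X_train.reshape((X_train.shape[0], 28, 21, 1))

model = Sequential()
# input_shape is one sample only: (rows, cols, channels), no sample count.
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', input_shape=(28, 21, 1)))
model.add(MaxPooling2D())
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D())
# Only two conv/pool blocks here: a 28x21 input is too small for a third 3x3 conv + pooling stage.
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))

model.summary()   # first Conv2D now reports output shape (None, 26, 19, 128)

If the 588 columns have no real 2D structure, a Conv1D stack over the (588, 1) shape you already reshaped to is usually the more natural fit for text or tabular features.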

I tested your code with the MNIST dataset and it's working. Working sample code:

from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Flatten
import tensorflow as tf

(X_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
#X_train = X_train.to_numpy()
#X_train = X_train.reshape((X_train.shape[0], X_train.shape[1],1))


model = tf.keras.Sequential()
model.add(Conv2D(128, kernel_size=(3,3), input_shape=(28,28,1),
                 activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(32, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))


model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])
model.fit(X_train,
          y_train,
          batch_size=128,
          epochs=1,
          verbose=1)

Output

469/469 [==============================] - 137s 289ms/step - loss: -10766237696.0000 - accuracy: 0.1124

(The shape error is gone; the extreme loss value is unsurprising because MNIST labels run from 0 to 9 while the model keeps the single sigmoid output and binary crossentropy from the question.)
