
Keras ValueError: Input 0 is incompatible with layer conv_lst_m2d_16: expected ndim=5, found ndim=4

I am trying to classify sequences of images into 2 classes. Each sequence has 5 frames. I have used ConvLSTM2D as the first layer, and I'm getting the above error. The input_shape parameter is input_shape = (timesteps, rows, columns, channels).

The data which I've generated is of this format:

self.data = np.random.random((self.number_of_samples, 
                                  self.timesteps,
                                  self.rows,
                                  self.columns,
                                  self.channels)) 

and the first layer is implemented as shown below:

model = Sequential()

# time distributed is used - working frame by frame
model.add(ConvLSTM2D(filters=10,
                     input_shape=input_shape,
                     kernel_size=(3, 3),
                     activation='relu',
                     data_format="channels_last"))

Can anyone please help me with this?

Edit: Here is my toy code:

import numpy as np
from keras.layers import Dense, Dropout, LSTM
from keras.layers import Conv2D, Flatten, ConvLSTM2D
from keras.models import Sequential
from keras.layers.wrappers import TimeDistributed
import time


class Classifier():
    """Classifier model to classify image sequences"""

    def __init__(self, number_of_samples, timesteps, rows, columns, channels, epochs, batch_size):
        self.number_of_samples = number_of_samples
        self.rows = rows
        self.columns = columns
        self.timesteps = timesteps
        self.channels = channels
        self.model = None
        self.data = []
        self.labels = []
        self.epochs = epochs
        self.batch_size = batch_size
        self.X_train = []
        self.X_test = []
        self.y_train = []
        self.y_test = []

    def build_model(self, input_shape, output_label_size):
        """Builds the classification model

        Keyword arguments:
            input_shape -- shape of the image array
            output_label_size -- 1
        """
        # initialize a sequential model
        model = Sequential()

        # time distributed is used - working frame by frame
        model.add(ConvLSTM2D(filters=10,
                             input_shape=input_shape,
                             kernel_size=(3, 3),
                             activation='relu',
                             data_format="channels_last"))
        print("output shape 1:{}".format(model.output_shape))
        print("correct till here")

        model.add(Dropout(0.2))
        model.add(ConvLSTM2D(filters=5,
                             kernel_size=(3, 3),
                             activation='relu'))
        print("correct till here")

        model.add(Dropout(0.2))
        model.add(Flatten())
        # print("output shape 2:{}".format(model.output_shape))
        model.add(LSTM(10))
        print("correct till here")
        # print("output shape 3:{}".format(model.output_shape))
        model.add(Dropout(0.2))
        model.add(LSTM(5))
        model.add(Dropout(0.2))
        # print("output shape 4:{}".format(model.output_shape))
        model.add(Dense(output_label_size,
                        kernel_initializer='uniform',
                        bias_initializer='zeros',
                        activation='sigmoid'))
        model.compile(optimizer='adam', loss='binary_crossentropy')
        print("correct till here")
        # model.summary()

        self.model = model

        print("[INFO] Classifier model generated")

    def split_data(self, data, labels):
        """Returns training and test set after splitting

        Keyword arguments:
            data -- image data
            labels -- 0 or 1
        """

        print("[INFO] split the data into training and testing sets")
        train_test_split = 0.9

        # split the data into train and test sets
        split_index = int(train_test_split * self.number_of_samples)
        # shuffled_indices = np.random.permutation(self.number_of_samples)
        indices = np.arange(self.number_of_samples)
        train_indices = indices[0:split_index]
        test_indices = indices[split_index:]

        X_train = data[train_indices, :, :]
        X_test = data[test_indices, :, :]
        y_train = labels[train_indices]
        y_test = labels[test_indices]

        print('Input shape: ', input_shape)
        print('X_train shape: ', X_train.shape)
        print('X_train[0] shape: ', X_train[0].shape)
        print('X_train[0][0] shape: ', X_train[0][0].shape)
        # print('y_train shape: ', y_train.shape)
        # print('X_test shape: ', X_test.shape)
        # print('y_test shape: ', y_test.shape)

        return X_train, X_test, y_train, y_test

    def load_training_data(self):
        """Load the training data for building the classification model."""

        self.data = np.random.random((self.number_of_samples,
                                      self.timesteps,
                                      self.rows,
                                      self.columns,
                                      self.channels))
        print("shape 1", type(self.data))
        print("shape 2", type(self.data[0]))
        print("shape 3", type(self.data[0][0]))

        # self.labels = np.zeros(self.number_of_samples)
        self.labels = np.ones(self.number_of_samples)

        X_train, X_test, y_train, y_test = self.split_data(self.data, self.labels)

        self.X_train = X_train
        self.X_test = X_test
        self.y_train = y_train
        self.y_test = y_test

        print("loading the training data done")

    def train_model(self):
        """Train the model

        Keyword arguments:
            epochs -- number of training iterations
            batch_size -- number of samples per batch
        """

        self.model.fit(x=self.X_train,
                       y=self.y_train,
                       batch_size=self.batch_size,
                       epochs=self.epochs,
                       verbose=1,
                       validation_data=(self.X_test, self.y_test))

        score = self.model.evaluate(self.X_test, self.y_test,
                                    verbose=1, batch_size=self.batch_size)

        prediction = self.model.predict(self.X_test,
                                        batch_size=self.batch_size,
                                        verbose=1)
        print("Loss:{}".format(score))
        print("Prediction:{}".format(prediction))


if __name__ == "__main__":
    start = time.time()
    number_of_samples = 12
    # number_of_test_samples = 2000
    timesteps = 5
    rows = 14
    columns = 14
    channels = 3
    output_label_size = 1
    epochs = 1
    batch_size = 1
    input_shape = (timesteps, rows, columns, channels)
    # input_shape = (batch_size, timesteps, rows, columns, channels)

    classifier_model = Classifier(number_of_samples,
                                  timesteps,
                                  rows,
                                  columns,
                                  channels,
                                  epochs,
                                  batch_size)

    classifier_model.load_training_data()
    classifier_model.build_model(input_shape, output_label_size)
    classifier_model.train_model()
    end = time.time()

    print("total time:{}".format(end - start))

There are several ways to specify the input shape. From the documentation:

Pass an input_shape argument to the first layer. This is a shape tuple (a tuple of integers or None entries, where None indicates that any positive integer may be expected). In input_shape, the batch dimension is not included.

Therefore, the right input shape is:

input_shape = (timesteps, rows, columns, channels)
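
A quick way to sanity-check this is to build the model and push a small random batch through it; the batch dimension appears only in the data, never in input_shape (a minimal sketch using the question's dimensions):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import ConvLSTM2D

timesteps, rows, columns, channels = 5, 14, 14, 3
model = Sequential()
model.add(ConvLSTM2D(filters=10, kernel_size=(3, 3),
                     input_shape=(timesteps, rows, columns, channels)))

# the data carries an extra leading batch dimension (here: 2 samples)
x = np.random.random((2, timesteps, rows, columns, channels))
print(model.predict(x, verbose=0).shape)  # (2, 12, 12, 10)
```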

After fixing this error, you will encounter the next one (it is not related to input_shape):

ValueError: Input 0 is incompatible with layer conv_lst_m2d_2: expected ndim=5, found ndim=4

This error occurs when you try to add the second ConvLSTM2D layer. It happens because the output of the first ConvLSTM2D layer is a 4D tensor with shape (samples, output_row, output_col, filters). You might want to set return_sequences=True, in which case the output is a 5D tensor with shape (samples, time, output_row, output_col, filters).
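
For instance, the two-layer stack from the question works once the first layer returns the full sequence (a minimal sketch with the question's dimensions):

```python
from keras.models import Sequential
from keras.layers import ConvLSTM2D

model = Sequential()
model.add(ConvLSTM2D(filters=10, kernel_size=(3, 3),
                     return_sequences=True,  # keep the time axis -> 5D output
                     input_shape=(5, 14, 14, 3)))
model.add(ConvLSTM2D(filters=5, kernel_size=(3, 3)))  # now receives ndim=5
# each 3x3 "valid" convolution shrinks 14 -> 12 -> 10
print(model.output_shape)  # (None, 10, 10, 5)
```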

After fixing this error, you will encounter a new error happening in the following lines:

model.add(Flatten())
model.add(LSTM(10))

It does not make sense to have a Flatten layer right before an LSTM layer: Flatten collapses the output to a 2D tensor, but the LSTM requires a 3D input tensor with shape (samples, time, input_dim), so this will never work.
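
If the intent is still to feed an LSTM after the convolutional-recurrent block, one option (an assumption about the intended architecture, not the only fix) is to keep return_sequences=True and flatten each frame separately with TimeDistributed, so the time axis survives:

```python
from keras.models import Sequential
from keras.layers import ConvLSTM2D, Flatten, LSTM, TimeDistributed

model = Sequential()
model.add(ConvLSTM2D(filters=10, kernel_size=(3, 3),
                     return_sequences=True,
                     input_shape=(5, 14, 14, 3)))
model.add(TimeDistributed(Flatten()))  # (None, 5, 12 * 12 * 10): 3D, as LSTM expects
model.add(LSTM(10))                    # (None, 10)
print(model.output_shape)
```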

To sum up, I highly recommend you take a close look at the Keras documentation, in particular for the LSTM and ConvLSTM2D layers. It is also important to understand how these layers work in order to make good use of them.
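
Putting the pieces together, one possible end-to-end version of the model (a sketch under my own assumptions: the LSTM layers are dropped and the final 4D ConvLSTM2D output is flattened straight into the sigmoid head; the labels below are toy data) could look like this:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import ConvLSTM2D, Dropout, Flatten, Dense

model = Sequential()
model.add(ConvLSTM2D(filters=10, kernel_size=(3, 3), activation='relu',
                     return_sequences=True,  # 5D output for the next ConvLSTM2D
                     input_shape=(5, 14, 14, 3)))
model.add(Dropout(0.2))
model.add(ConvLSTM2D(filters=5, kernel_size=(3, 3), activation='relu'))
model.add(Flatten())  # the last recurrent layer outputs 4D, so Flatten is safe here
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')

# toy data matching the question's dimensions
x = np.random.random((12, 5, 14, 14, 3))
y = np.random.randint(0, 2, size=(12,))
model.fit(x, y, batch_size=4, epochs=1, verbose=0)
print(model.predict(x, verbose=0).shape)  # (12, 1)
```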
