
Creating a Keras Sequence for a functional API model

I'm creating a model that uses Keras's functional API. This model takes 2 inputs, hence I'm using:

video_input = Input(shape=(16, 112, 112, 3))
image_input = Input(shape=(112, 112, 3))
Model(inputs=[video_input, image_input], outputs=merge_model)

So, as you can see, the model expects a list of two arrays, the first with elements of shape (16, 112, 112, 3) and the second with elements of shape (112, 112, 3).
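
For reference, here is a minimal sketch of how a two-input model like this consumes its data; the pooling/Dense branches and the zero-filled batches are assumptions added only to make the sketch self-contained, and only the two Input shapes come from my actual model:

import numpy as np
from keras.layers import Input, Dense, GlobalAveragePooling2D, GlobalAveragePooling3D, concatenate
from keras.models import Model

video_input = Input(shape=(16, 112, 112, 3))
image_input = Input(shape=(112, 112, 3))

# dummy branches just so there is something to merge (assumption, not the real network)
v = GlobalAveragePooling3D()(video_input)
s = GlobalAveragePooling2D()(image_input)
merge_model = Dense(10, activation='softmax')(concatenate([v, s]))

model = Model(inputs=[video_input, image_input], outputs=merge_model)
model.compile(optimizer='adam', loss='categorical_crossentropy')

# one batch is a LIST of two arrays, one per Input, batch dimension first
video_batch = np.zeros((4, 16, 112, 112, 3))
image_batch = np.zeros((4, 112, 112, 3))
labels = np.zeros((4, 10))
model.fit([video_batch, image_batch], labels, epochs=1)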

I'm using a class that I created, which inherits from keras.utils.Sequence, to provide generated batches of data. The problem appears after a batch is generated: when TensorFlow tries to feed the model, the input is no longer a list of 2 arrays but a list of 1 array whose single element contains the 2. In other words, the model should receive [array(...), array(...)], but instead it receives [array(array[...], array[...])]:

ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[array([[[[-76.87925 , -81.45539 , -82.91122 ],
         [-76.90526 , -81.45103 , -83.00473 ],
         [-76.77082 , -81.259674, -82.92529 ],
         ...,
         [-76.17821 , -80.61866 , -8...

I tried making the data holder in the sequence generator a Python list, appending the data to it and then converting it to a NumPy array, but I got the error above. Somehow it gets wrapped into 1 array before it is returned to the model.
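
To see what happens to that list in isolation, here is a sketch with random data standing in for frame_data and im (the shapes are assumed from the Input definitions above):

import numpy as np

batch_size = 4
X = []
for _ in range(batch_size):
    frame_data = np.random.rand(16, 112, 112, 3)   # stands in for the clip returned by get_frames_data
    im = frame_data[0]                              # stands in for the randomly picked frame
    X.append([frame_data, im])

# with older NumPy, plain np.array(X) did this implicitly (with a warning)
X = np.array(X, dtype=object)
print(X.shape, X.dtype)   # (4, 2) object

Because the clip and the single frame have different shapes, NumPy cannot build a regular array, so the whole batch collapses into one (batch_size, 2) object array, which is exactly the single array the error message complains about.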

This is the data generation method:

def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples'  # X : (n_samples, *dim, n_channels)
        # Initialization
        X = []
        y = np.empty((self.batch_size), dtype=int)
        # Generate data
        for i, ID in enumerate(list_IDs_temp):
            # Store sample
            print(ID)
            frame_data = input_data.get_frames_data(
                self.work_directory + ID, self.num_of_frames, self.crop_size)
            image_index = random.randint(0, len(frame_data) - 1)
            im = frame_data[image_index]
            X.append([frame_data, im])

            # Store class
            y[i] = self.labels[ID]

        return np.array(X), keras.utils.to_categorical(
            y, num_classes=self.n_classes)

Edited function that works:

def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples'  # X : (n_samples, *dim, n_channels)
        # Initialization
        vX = np.empty((self.batch_size, *self.c3d_dim))
        iX = np.empty((self.batch_size, *self.static_dim))

        y = np.empty((self.batch_size), dtype=int)
        # Generate data
        for i, ID in enumerate(list_IDs_temp):
            # Store sample
            print(ID)
            frame_data = input_data.get_frames_data(
                self.work_directory + ID, self.num_of_frames, self.crop_size)
            image_index = random.randint(0, len(frame_data) - 1)
            im = frame_data[image_index]
            vX[i, ] = frame_data
            iX[i, ] = im
            # Store class
            y[i] = self.labels[ID]

        return vX, iX, keras.utils.to_categorical(
            y, num_classes=self.n_classes)
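
For completeness, __getitem__ then has to hand these back to Keras as a list of two arrays. A minimal sketch of what that could look like (self.indexes and self.list_IDs are assumed to follow the usual Sequence bookkeeping; they are not shown above):

def __getitem__(self, index):
    'Generate one batch of data'
    # pick the IDs belonging to this batch (assumed bookkeeping)
    indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]
    list_IDs_temp = [self.list_IDs[k] for k in indexes]

    vX, iX, y = self.__data_generation(list_IDs_temp)

    # two model inputs -> a list of two arrays, matching
    # Model(inputs=[video_input, image_input], outputs=merge_model)
    return [vX, iX], y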

As I remember, you should feed each input as an independent array. For example, if you have 2 input images per sample, you should not pass an array of the form [[image_1, image_2], [image_3, image_4], [image_5, image_6], ...]; instead you should pass something like [[image_1, image_3, image_5, ...], [image_2, image_4, image_6, ...]]. As you can see, the first array holds the data for the first image input and the second array holds the data for the second image input. This applies to your case as well. Just store the inputs in different arrays and combine them when you call fit. It should be something like [video_frames, images].

Hope it helps.
