
Does a Keras model affect the size of the input data?

I mean, if a convnet model is fed n samples, it will give out n outputs, right? However, when I tried this with a bottleneck model (built on top of the VGG16 convnet), the VGG16 convnet returned 16 fewer outputs than the number of inputs.

This is the console output:

import numpy as np

train_data = np.load(open('bottleneck_features_train.npy'))
train_data.shape
(8384, 7, 7, 512)

validation_data = np.load(open('bottleneck_features_validation.npy'))
validation_data.shape
(3584, 7, 7, 512)

The script that generated this output can be found here.

The stack trace for the above script:

Using Theano backend.
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
Found 8400 images belonging to 120 classes. Saving train features...
Found 3600 images belonging to 120 classes. Saving test features...
Training top layers...
Compiling bottleneck model...
Training bottleneck model...
Traceback (most recent call last):
  File "pretrained_network.py", line 87, in <module>
    train_top_model()
  File "pretrained_network.py", line 82, in train_top_model
    validation_data=(validation_data, validation_labels))
  File "/home/ashish/ml-projects/venv/local/lib/python2.7/site-packages/keras/models.py", line 845, in fit
    initial_epoch=initial_epoch)
  File "/home/ashish/ml-projects/venv/local/lib/python2.7/site-packages/keras/engine/training.py", line 1405, in fit
    batch_size=batch_size)
  File "/home/ashish/ml-projects/venv/local/lib/python2.7/site-packages/keras/engine/training.py", line 1307, in _standardize_user_data
    _check_array_lengths(x, y, sample_weights)
  File "/home/ashish/ml-projects/venv/local/lib/python2.7/site-packages/keras/engine/training.py", line 229, in _check_array_lengths
    'and ' + str(list(set_y)[0]) + ' target samples.')
ValueError: Input arrays should have the same number of samples as target arrays. Found 8384 input samples and 8400 target samples.
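The numbers in the error are consistent with exactly one dropped partial batch. The batch size isn't shown in this excerpt, but assuming the Keras `flow_from_directory` default of 32, both the train and validation shortfalls equal `n % 32` — a quick check:

```python
n_train, n_val = 8400, 3600      # images found by flow_from_directory (per the log)
got_train, got_val = 8384, 3584  # samples actually saved in the .npy files
bs = 32                          # assumed batch size (Keras' flow_from_directory default)

# the shortfall equals the size of the final partial batch that was never requested
print(n_train - got_train, n_train % bs)  # 16 16
print(n_val - got_val, n_val % bs)        # 16 16
```

Both shortfalls being exactly `n % 32` points at the step count passed to `predict_generator` rather than at the model itself.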

The problem lies here in your script:

bottleneck_features_train = model.predict_generator(
        generator, nb_train_samples // batch_size)

It should be changed to:

bottleneck_features_train = model.predict_generator(
        generator, (nb_train_samples // batch_size) + 1)

Without this, the generator is not called enough times: the integer division drops the final partial batch, so `predict_generator` returns fewer samples than `flow_from_directory` found. (Note that if `nb_train_samples` happens to be an exact multiple of `batch_size`, the `+ 1` would request one extra batch; computing the step count as `math.ceil(nb_train_samples / float(batch_size))` handles both cases.)
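The effect can be sketched with a plain Python generator standing in for `flow_from_directory` — the function name and the batch size of 32 are assumptions; the real Keras iterator cycles forever, but within one pass it yields full batches followed by a final partial batch:

```python
import math

def batch_sizes(n_samples, batch_size):
    """Yield the sizes of the batches a directory iterator produces in one
    pass over the data: full batches, then one partial batch if a remainder."""
    full, rem = divmod(n_samples, batch_size)
    for _ in range(full):
        yield batch_size
    if rem:
        yield rem

n, bs = 8400, 32                      # assumed counts, matching the log above

floor_steps = n // bs                 # what the original script passes: 262 steps
gen = batch_sizes(n, bs)
collected = sum(next(gen) for _ in range(floor_steps))
print(collected)                      # 8384 -- the partial batch of 16 is never requested

ceil_steps = math.ceil(n / bs)        # 263 steps covers every sample
gen = batch_sizes(n, bs)
print(sum(next(gen) for _ in range(ceil_steps)))  # 8400
```

This reproduces the 8384-vs-8400 mismatch from the traceback: 262 steps of 32 samples collect 8384 features, while the 8400 labels are counted directly from the directory listing.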
