
Keras model.fit ValueError: Input arrays should have the same number of samples as target arrays

I'm trying to load the bottleneck features I obtained from running ResNet50 into a top-layer model. I ran predict_generator on ResNet50 and saved the resulting bottleneck features to an .npy file. I am unable to fit the model I have created because of the following error:

Traceback (most recent call last):
  File "Labeled_Image_Recognition.py", line 119, in <module>
    callbacks=[checkpointer])
  File "/home/dillon/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/models.py", line 963, in fit
    validation_steps=validation_steps)
  File "/home/dillon/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/training.py", line 1630, in fit
    batch_size=batch_size)
  File "/home/dillon/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/training.py", line 1490, in _standardize_user_data
    _check_array_lengths(x, y, sample_weights)
  File "/home/dillon/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/training.py", line 220, in _check_array_lengths
    'and ' + str(list(set_y)[0]) + ' target samples.')
ValueError: Input arrays should have the same number of samples as target arrays. Found 940286 input samples and 14951 target samples.

I'm not really sure what this means. I have 940286 total images in my train dir, and they are separated into 14951 subdirectories (one per class). My two hypotheses are:

  1. It is possible that I am not formatting train_data and train_labels correctly.
  2. It is possible that I set up the model incorrectly.

Any guidance into the right direction would be much appreciated!

Here is the code:

# Imports
import os
import numpy as np
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout
from keras.callbacks import ModelCheckpoint

# Constants
num_train_dirs = 14951  # total number of classes I have
num_valid_dirs = 13168

def load_labels(path):
    # One entry per subdirectory (class) under path, one-hot encoded
    targets = os.listdir(path)
    labels = np_utils.to_categorical(targets, len(targets))
    return labels

def create_model(train_data):
    model = Sequential()
    model.add(Flatten(input_shape=train_data.shape[1:]))
    model.add(Dense(num_train_dirs, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(num_train_dirs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model    

train_data = np.load(open('bottleneck_features/bottleneck_features_train.npy', 'rb'))
train_labels = load_labels(raid_train_dir)

valid_data = np.load(open('bottleneck_features/bottleneck_features_valid.npy', 'rb'))
valid_labels = train_labels

model = create_model(train_data)
model.summary()

checkpointer = ModelCheckpoint(filepath='weights/first_try.hdf5', verbose=1, save_best_only=True)

print("Fitting model...")

model.fit(train_data, train_labels,
          epochs=50,
          batch_size=100,
          verbose=1,
          validation_data=(valid_data, valid_labels),
          callbacks=[checkpointer])
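
For reference, the bottleneck features loaded above were generated roughly like this. This is a sketch of the standard predict_generator workflow rather than my exact script; the image size and batch size are placeholders:

# Rough sketch of the feature-extraction step (image size and batch size are placeholders)
import math
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input
from keras.preprocessing.image import ImageDataGenerator

base_model = ResNet50(weights='imagenet', include_top=False)  # no classification head

datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
generator = datagen.flow_from_directory(
    raid_train_dir,            # training directory with one subdir per class
    target_size=(224, 224),
    batch_size=100,
    class_mode=None,           # features only, no labels
    shuffle=False)             # keep directory order so labels can be aligned later

steps = int(math.ceil(generator.samples / float(generator.batch_size)))
bottleneck_features_train = base_model.predict_generator(generator, steps)
np.save('bottleneck_features/bottleneck_features_train.npy', bottleneck_features_train)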

In supervised learning, the number of input samples (X) must match the number of target (label) samples (Y).

For example: if we want to fit (train) a neural network to recognize handwritten digits and we feed 10,000 images (X) to our model, then we should also pass 10,000 labels (Y).
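
You can see the mismatch directly by comparing the leading dimension of both arrays before calling fit:

# The leading dimension must agree: one label row per feature row
print(train_data.shape[0])    # 940286 -- one row per image
print(train_labels.shape[0])  # 14951  -- one row per class, not per image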

In your case those numbers don't match.
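
Your load_labels builds one one-hot row per subdirectory (class), so you get 14951 label rows for 940286 feature rows. You need one label row per image instead. Assuming the bottleneck features were produced with flow_from_directory(..., shuffle=False), the generator's classes attribute lists the class index of every image in the same order, so something along these lines should give labels that line up with the saved features (a sketch; adjust the directory, image size, and batch size to your setup):

from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils

datagen = ImageDataGenerator()
generator = datagen.flow_from_directory(
    raid_train_dir,            # same directory the bottleneck features came from
    target_size=(224, 224),    # placeholder; only the directory scan matters here
    batch_size=100,
    class_mode=None,
    shuffle=False)             # same order as the saved feature array

# generator.classes holds one class index per image (length 940286);
# one-hot encode it to shape (940286, 14951)
train_labels = np_utils.to_categorical(generator.classes, num_train_dirs)

The validation labels need the same treatment: build valid_labels from the validation directory rather than reusing the training labels, so that it matches valid_data row for row.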

