
Input and output shapes in keras ANN

I am trying to implement an ANN using Keras for a multiclass classification task.

This is my dataset:

#features shape (9498, 17)
#labels shape (9498,) 

where 9498 is the number of pixels and 17 is the number of timestamps, and I have 24 classes that I want to predict.

I wanted to start with something very basic. This is the code I used:

import keras
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split

# Splitting the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(NDVI, labels, test_size=0.15, random_state=42)

# Building the model
model = Sequential([
  Dense(128, activation='relu', input_shape=(17,),name="layer1"),
  Dense(64, activation='relu', name="layer2"),
  Dense(24, activation='softmax', name="layer3"),
])
print(model.summary())


# Compiling the model
model.compile(
  optimizer='adam',                              # gradient-based optimizer
  loss='categorical_crossentropy',               # (>2 classes)
  metrics=['accuracy'],
)

# Training the model
model.fit(
  X_train, # training data
  y_train, # training targets
  epochs=5,
  batch_size=32,
)

Which results in the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-17-2f4cf6510b24> in <module>()
     23   y_train, # training targets
     24   epochs=5,
---> 25   batch_size=32,
     26 )

2 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1152             sample_weight=sample_weight,
   1153             class_weight=class_weight,
-> 1154             batch_size=batch_size)
   1155 
   1156         # Prepare validation data.

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    619                 feed_output_shapes,
    620                 check_batch_axis=False,  # Don't enforce the batch size.
--> 621                 exception_prefix='target')
    622 
    623             # Generate sample-wise weight values given the `sample_weight` and

/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    143                             ': expected ' + names[i] + ' to have shape ' +
    144                             str(shape) + ' but got array with shape ' +
--> 145                             str(data_shape))
    146     return data
    147 

ValueError: Error when checking target: expected layer3 to have shape (24,) but got array with shape (1,)

I don't know why this error pops up. I also don't seem to understand input and output shapes in Keras, even though I have checked other similar posts on the same topic.

This error occurs because your labels have shape (9498,), i.e. one value per sample, while the model's output layer expects targets of shape (9498, 24), i.e. one column per output class. To get a 24-class output, the targets you train on must also have 24 columns.
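
As a minimal illustration (not from the original answer), assuming the labels are integer class IDs in the range 0..23, one-hot encoding turns a single label into a length-24 vector:

import numpy as np

# Hypothetical single label, assumed to be an integer class ID between 0 and 23
label = 3
one_hot = np.zeros(24)
one_hot[label] = 1.0     # 1 at the index of the correct class, 0 everywhere else
print(one_hot.shape)     # (24,) -- matches the per-sample shape layer3 expects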

The main issue is the shape of the labels: because you are using loss='categorical_crossentropy', Keras expects one-hot encoded labels, where the correct class is 1 and all other classes are 0. Since you have 24 classes, the expected label array has shape 9498 x 24. So transform your labels using:

from keras.utils import to_categorical
labels = to_categorical(labels)
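
A quick sanity check (assuming labels originally held integer class IDs from 0 to 23):

print(labels.shape)  # expected: (9498, 24) after one-hot encoding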

Your code then becomes:

import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split


# One-hot encode the labels: (9498,) -> (9498, 24)
labels = to_categorical(labels)

# Splitting the data to training and testing sets
X_train, X_test, y_train, y_test = train_test_split(NDVI, labels, test_size=0.15, random_state=42)

# Building the model
model = Sequential([
  Dense(128, activation='relu', input_shape=(17,),name="layer1"),
  Dense(64, activation='relu', name="layer2"),
  Dense(24, activation='softmax', name="layer3"),
])
print(model.summary())


# Compiling the model
model.compile(
  optimizer='adam',                              # gradient-based optimizer
  loss='categorical_crossentropy',               # (>2 classes)
  metrics=['accuracy'],
)

# Training the model
model.fit(
  X_train, # training data
  y_train, # training targets
  epochs=5,
  batch_size=32,
)
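
As an alternative sketch (my own suggestion, not part of the answer above), the labels could be left as integer class IDs and the loss switched to sparse_categorical_crossentropy, which expects integer targets instead of one-hot vectors:

# Alternative: keep integer labels (shape (9498,)) and skip to_categorical
model.compile(
  optimizer='adam',
  loss='sparse_categorical_crossentropy',  # expects integer class indices
  metrics=['accuracy'],
)
model.fit(X_train, y_train, epochs=5, batch_size=32)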
