
ValueError: Error when checking input: expected input_13 to have 3 dimensions, but got array with shape (50000, 32, 32, 3)

I'm getting a dimension error while training a variational autoencoder, and I can't figure out what I'm doing wrong. I had a dimension error before in a network that used only Dense layers, and I solved it by adding a Flatten layer, but this error can't be fixed that way.

The dataset I'm using is CIFAR10.

Here is my code and its output:

import numpy as np
import tensorflow as tf
from tensorflow import keras

K = keras.backend

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
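# rounded_accuracy is referenced in compile() below but not defined in the
# snippet; this is an assumed definition (binary accuracy on rounded pixel
# values), not necessarily the exact one the asker used:
def rounded_accuracy(y_true, y_pred):
    return keras.metrics.binary_accuracy(tf.round(y_true), tf.round(y_pred))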

class Sampling(keras.layers.Layer):
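    # Reparameterization trick: z = mean + exp(log_var / 2) * epsilon, with epsilon ~ N(0, I)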
    def call(self, inputs):
        mean, log_var = inputs
        return K.random_normal(tf.shape(log_var)) * K.exp(log_var / 2) + mean 

tf.random.set_seed(42)
np.random.seed(42)

codings_size = 10

inputs = keras.layers.Input(shape=(32, 32))
z = keras.layers.Flatten()(inputs)
z = keras.layers.Dense(150, activation="selu")(z)
z = keras.layers.Dense(100, activation="selu")(z)
codings_mean = keras.layers.Dense(codings_size)(z)
codings_log_var = keras.layers.Dense(codings_size)(z)
codings = Sampling()([codings_mean, codings_log_var])
variational_encoder = keras.models.Model(
    inputs=[inputs], outputs=[codings_mean, codings_log_var, codings])

decoder_inputs = keras.layers.Input(shape=[codings_size])
x = keras.layers.Dense(100, activation="selu")(decoder_inputs)
x = keras.layers.Dense(150, activation="selu")(x)
x = keras.layers.Dense(28 * 28, activation="sigmoid")(x)
outputs = keras.layers.Reshape([28, 28])(x)
variational_decoder = keras.models.Model(inputs=[decoder_inputs], outputs=[outputs])

_, _, codings = variational_encoder(inputs)
reconstructions = variational_decoder(codings)
variational_ae = keras.models.Model(inputs=[inputs], outputs=[reconstructions])

variational_ae.summary()

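# KL divergence between the learned coding distribution N(mean, exp(log_var))
# and the standard normal prior.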
latent_loss = -0.5 * K.sum(
    1 + codings_log_var - K.exp(codings_log_var) - K.square(codings_mean),
    axis=-1)
variational_ae.add_loss(K.mean(latent_loss) / 784.)
variational_ae.compile(loss="binary_crossentropy", optimizer="rmsprop", metrics=[rounded_accuracy])
history = variational_ae.fit(x_train, x_train, epochs=25, batch_size=128,
                             validation_data=(x_test, x_test))

Model summary:

Model: "model_20"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_13 (InputLayer)        [(None, 32, 32)]          0         
_________________________________________________________________
model_18 (Model)             [(None, 10), (None, 10),  170870    
_________________________________________________________________
model_19 (Model)             (None, 28, 28)            134634    
=================================================================
Total params: 305,504
Trainable params: 305,504
Non-trainable params: 0
_________________________________________________________________

Traceback:

ValueError                                Traceback (most recent call last)
<ipython-input-163-310c1702abb5> in <module>
     33 variational_ae.compile(loss="binary_crossentropy", optimizer="rmsprop", metrics=[rounded_accuracy])
     34 history = variational_ae.fit(x_train, x_train, epochs=25, batch_size=128,
---> 35                              validation_data=(x_test, x_test))

~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    817         max_queue_size=max_queue_size,
    818         workers=workers,
--> 819         use_multiprocessing=use_multiprocessing)
    820 
    821   def evaluate(self,

~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    233           max_queue_size=max_queue_size,
    234           workers=workers,
--> 235           use_multiprocessing=use_multiprocessing)
    236 
    237       total_samples = _get_total_number_of_samples(training_data_adapter)

~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    591         max_queue_size=max_queue_size,
    592         workers=workers,
--> 593         use_multiprocessing=use_multiprocessing)
    594     val_adapter = None
    595     if validation_data:

~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    644     standardize_function = None
    645     x, y, sample_weights = standardize(
--> 646         x, y, sample_weight=sample_weights)
    647   elif adapter_cls is data_adapter.ListsOfScalarsDataAdapter:
    648     standardize_function = standardize

~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
   2381         is_dataset=is_dataset,
   2382         class_weight=class_weight,
-> 2383         batch_size=batch_size)
   2384 
   2385   def _standardize_tensors(self, x, y, sample_weight, run_eagerly, dict_inputs,

~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_tensors(self, x, y, sample_weight, run_eagerly, dict_inputs, is_dataset, class_weight, batch_size)
   2408           feed_input_shapes,
   2409           check_batch_axis=False,  # Don't enforce the batch size.
-> 2410           exception_prefix='input')
   2411 
   2412     # Get typespecs for the input data and sanitize it if necessary.

~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    571                            ': expected ' + names[i] + ' to have ' +
    572                            str(len(shape)) + ' dimensions, but got array '
--> 573                            'with shape ' + str(data_shape))
    574         if not check_batch_axis:
    575           data_shape = data_shape[1:]

ValueError: Error when checking input: expected input_13 to have 3 dimensions, but got array with shape (50000, 32, 32, 3)

I tried setting shape=(32, 32, 3), but that leads to the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-166-45b654f7e264> in <module>
     33 variational_ae.compile(loss="binary_crossentropy", optimizer="rmsprop", metrics=[rounded_accuracy])
     34 history = variational_ae.fit(x_train, x_train, epochs=25, batch_size=128,
---> 35                              validation_data=(x_test, x_test))

[... stack frames identical to the first traceback omitted ...]

~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_tensors(self, x, y, sample_weight, run_eagerly, dict_inputs, is_dataset, class_weight, batch_size)
   2487           # Additional checks to avoid users mistakenly using improper loss fns.
   2488           training_utils.check_loss_and_target_compatibility(
-> 2489               y, self._feed_loss_fns, feed_output_shapes)
   2490 
   2491       sample_weights, _, _ = training_utils.handle_partial_sample_weights(

~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_loss_and_target_compatibility(targets, loss_fns, output_shapes)
    808           raise ValueError('A target array with shape ' + str(y.shape) +
    809                            ' was passed for an output of shape ' + str(shape) +
--> 810                            ' while using as loss `' + loss_name + '`. '
    811                            'This loss expects targets to have the same shape '
    812                            'as the output.')

ValueError: A target array with shape (50000, 32, 32, 3) was passed for an output of shape (None, 28, 28) while using as loss `binary_crossentropy`. This loss expects targets to have the same shape as the output.

I also tried removing the Input layer (so that the first layer is the Flatten layer), but then I still have to supply an input shape, or I get an error complaining that a 'Flatten' object has no attribute 'shape'. When I do supply the input shape (input_shape=(32, 32)), I get the same error.

Can someone tell me what is going wrong here and how I can fix it?

There are at least two problems in your code:

  • Your input data does not match the network's input: the data has shape (32, 32, 3), while the network expects (32, 32). One possible fix is to load the images in grayscale so they match the network's input, or to make the network accept images with 3 channels.
  • Your ground truth (the targets) does not match the network's output: once the inputs are (32, 32), the targets are (32, 32) per image, while the decoder outputs (28, 28). You need to redesign the decoder part so that its output is a matrix with the same shape as the input ((32, 32) in your example); a minimal sketch follows after this list.
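For the decoder, a minimal sketch (assuming the grayscale route, so the reconstruction target is a (32, 32) image; the layer sizes are kept from the question) only needs the last two layers changed:

decoder_inputs = keras.layers.Input(shape=[codings_size])
x = keras.layers.Dense(100, activation="selu")(decoder_inputs)
x = keras.layers.Dense(150, activation="selu")(x)
x = keras.layers.Dense(32 * 32, activation="sigmoid")(x)  # 1024 outputs, one per pixel
outputs = keras.layers.Reshape([32, 32])(x)               # (32, 32) to match the input
variational_decoder = keras.models.Model(inputs=[decoder_inputs], outputs=[outputs])

Note that the 784 in the KL-loss scaling (K.mean(latent_loss) / 784.) is also left over from 28 × 28 MNIST images; with 32 × 32 inputs it should be 1024 to keep the same relative weighting of the two loss terms.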

To convert the arrays to grayscale, you can use tf.image.rgb_to_grayscale, and tf.squeeze to remove the last dimension:

x_train_grayscale = tf.squeeze(tf.image.rgb_to_grayscale(x_train), axis=-1)
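Putting it together (a sketch under the same assumptions, once the model has been rebuilt with the fixes above): convert both splits, scale the pixel values to [0, 1] so they are valid targets for binary_crossentropy, and train on the grayscale arrays:

# Convert both splits to single-channel images and scale to [0, 1].
x_train_grayscale = tf.squeeze(tf.image.rgb_to_grayscale(x_train), axis=-1) / 255.
x_test_grayscale = tf.squeeze(tf.image.rgb_to_grayscale(x_test), axis=-1) / 255.

history = variational_ae.fit(x_train_grayscale, x_train_grayscale,
                             epochs=25, batch_size=128,
                             validation_data=(x_test_grayscale, x_test_grayscale))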
