
ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

I am trying to solve a classification problem. I do not know why I get this error:

ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

Here is the main code:

model = createModel()
filesPath = getFilesPathWithoutSeizure(i, indexPat)
history = model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75))  ## problem here
def createModel():
    input_shape = (1, 11, 3840)
    model = Sequential()
    # C1
    model.add(Conv2D(16, (5, 5), strides=(2, 2), padding='same', activation='relu',
                     data_format="channels_first", input_shape=input_shape))
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    # C2
    model.add(Conv2D(32, (3, 3), strides=(1, 1), padding='same', data_format="channels_first", activation='relu'))  # unsure whether to remove padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    # C3
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', data_format="channels_first", activation='relu'))  # unsure whether to remove padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))
    opt_adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])

    return model

Error:

    history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75), #end=75),#It take the first 75%
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1815, in fit_generator
    return self.fit(
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
    tmp_logs = train_function(iterator)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 823, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 696, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3065, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
        return step_function(self, iterator)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:796 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
        return fn(*args, **kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:789 run_step  **
        outputs = model.train_step(data)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:747 train_step
        y_pred = self(x, training=True)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:975 __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs,
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/input_spec.py:191 assert_input_compatibility
        raise ValueError('Input ' + str(input_index) + ' of layer ' +

    ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

Keras always hides the 0-th dimension, also known as the batch dimension. When you write input_shape = (A, B, C), the batch dimension should not be mentioned there; (A, B, C) should be the shape of a single object (in your case, a single image). For example, if you say input_shape = (1, 11, 3840), it means the data used for training or prediction should be a numpy array with a shape like (7, 1, 11, 3840), i.e. a training batch containing 7 objects. That 7 is the batch size, the number of objects trained in parallel.
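As a minimal sketch of this shape relationship (placeholder zeros and a deliberately tiny, hypothetical model, not your createModel()):

import numpy as np
from tensorflow import keras

# input_shape describes ONE object; the batch axis (0-th dimension) is implicit
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(1, 11, 3840)),
    keras.layers.Dense(2, activation='softmax'),
])

batch = np.zeros((7, 1, 11, 3840))   # 7 objects, each of shape (1, 11, 3840)
print(model.predict(batch).shape)    # (7, 2): one prediction per object in the batch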

So, if a single one of your objects (e.g. an image) has shape (11, 3840), then you should write input_shape = (11, 3840) everywhere, without mentioning the batch size.

Why does Keras hide the 0-th batch dimension? Because Keras expects batches of varying size: today you can feed it 7 objects, tomorrow 9, and the same network works for both. But the shape (11, 3840) of a single object should never change, and the data supplied for training by generate_arrays_for_training() should always have shape (BatchSize, 11, 3840), where BatchSize can vary; for example, you could generate a batch of 179 object-images, each of shape (11, 3840).

If the images should be 3-dimensional with 1 channel for all layers, then you must expand the training data produced by the generator with X = np.expand_dims(X, 0) so that your training X data has shape (1, 1, 11, 3840), i.e. a batch of 1 object; only then can you keep input_shape = (1, 11, 3840).
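A minimal sketch of that expansion (assuming each X produced inside generate_arrays_for_training() starts as an (11, 3840) array; the zeros are placeholder data):

import numpy as np

X = np.zeros((11, 3840))      # one object as produced by the generator (placeholder)
X = np.expand_dims(X, 0)      # add the channel axis -> (1, 11, 3840)
X = np.expand_dims(X, 0)      # add the batch axis   -> (1, 1, 11, 3840)
print(X.shape)                # (1, 1, 11, 3840): a batch of 1 object, matching input_shape=(1, 11, 3840)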

I also see that you write data_format="channels_first" everywhere; by default all layers use channels_last. To avoid writing this everywhere, you can reshape the data produced by generate_arrays_for_training(): if X has shape (1, 1, 11, 3840), then do X = X.transpose(0, 2, 3, 1) and your channels become the last dimension.

Transpose moves a dimension to another position. However, since in your case there is only 1 channel, instead of transposing you can simply reshape: for example, X of shape (1, 1, 11, 3840) can be reshaped with X = X.reshape(1, 11, 3840, 1), and it becomes shape (1, 11, 3840, 1). This is only needed if you do not want to write "channels_first" everywhere; if you do not care about tidying the code, you do not need the transpose/reshape at all!
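A minimal sketch showing that, with a single channel, transpose and reshape give the same channels_last layout (placeholder zeros):

import numpy as np

X = np.zeros((1, 1, 11, 3840))     # (batch, channels, height, width): channels_first layout

X_t = X.transpose(0, 2, 3, 1)      # move the channel axis to the end -> (1, 11, 3840, 1)
X_r = X.reshape(1, 11, 3840, 1)    # with only 1 channel, a plain reshape gives the same result

print(X_t.shape, X_r.shape)        # (1, 11, 3840, 1) (1, 11, 3840, 1)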

I remember that in the past Keras somehow disliked dimensions of size 1 and, in several different functions, basically tried to remove them: if Keras saw an array of shape (1, 2, 1, 3, 1, 4), it would almost always try to reshape it to (2, 3, 4), so np.expand_dims() was effectively ignored. In that case, the only solution may be to generate batches of at least 2 images.

You can also read my long answer; although it is somewhat unrelated, it can help you understand how training/prediction works in Keras, especially the final paragraphs numbered 1-12.

Update: the problem appears to have been solved with the following modifications (a combined sketch follows the list):

  1. In the data-generation function, expand the dimensions twice: X = np.expand_dims(np.expand_dims(X, 0), 0).

  2. In the data-generation function, also apply X = X.transpose(0, 2, 3, 1).

  3. In the network code, set the input shape to input_shape = (11, 3840, 1).

  4. In the network code, remove every data_format = "channels_first" argument.
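A minimal sketch combining these four changes (the generator body, the random placeholder data, and the reduced model are hypothetical; only the shape handling follows the list above, not the full createModel() architecture):

import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Flatten, Dense

def generate_arrays_for_training_fixed():
    while True:
        X = np.random.rand(11, 3840)                 # one object (placeholder data)
        X = np.expand_dims(np.expand_dims(X, 0), 0)  # step 1: (1, 1, 11, 3840), batch + channel axes
        X = X.transpose(0, 2, 3, 1)                  # step 2: (1, 11, 3840, 1), channels_last
        y = np.array([[1.0, 0.0]])                   # one-hot label for the 2-class softmax
        yield X, y

model = keras.Sequential([
    Conv2D(16, (5, 5), strides=(2, 2), padding='same', activation='relu',
           input_shape=(11, 3840, 1)),               # step 3: channels_last input shape
    MaxPooling2D((2, 2), padding='same'),            # step 4: no data_format="channels_first" anywhere
    BatchNormalization(),
    Flatten(),
    Dense(2, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(generate_arrays_for_training_fixed(), steps_per_epoch=2, epochs=1)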
