
ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

I am trying to solve a classification problem, and I do not understand why I am getting this error:

ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

This is the main code:

# imports needed for the snippets below (TensorFlow 2.x Keras, as used in the traceback)
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, Flatten, Dropout, Dense

model = createModel()
filesPath = getFilesPathWithoutSeizure(i, indexPat)
history = model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75))  # problem here
def createModel():
    input_shape=(1,11, 3840)
    model = Sequential()
    #C1
    model.add(Conv2D(16, (5, 5), strides=(2, 2), padding='same', activation='relu', data_format="channels_first", input_shape=input_shape))
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    #C2
    model.add(Conv2D(32, (3, 3), strides=(1, 1), padding='same', data_format="channels_first", activation='relu'))  # unsure whether to remove padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    #C3
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', data_format="channels_first", activation='relu'))  # unsure whether to remove padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))
    opt_adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])
    
    return model

Error:

    history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75), #end=75),#It take the first 75%
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1815, in fit_generator
    return self.fit(
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
    tmp_logs = train_function(iterator)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 823, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 696, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3065, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
        return step_function(self, iterator)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:796 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
        return fn(*args, **kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:789 run_step  **
        outputs = model.train_step(data)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:747 train_step
        y_pred = self(x, training=True)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:975 __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs,
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/input_spec.py:191 assert_input_compatibility
        raise ValueError('Input ' + str(input_index) + ' of layer ' +

    ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

Keras always hides the 0-th dimension, also called the batch dimension. When you set input_shape = (A, B, C), the batch dimension should not be mentioned there; (A, B, C) should be the shape of a single object (an image, in your case). For example, input_shape = (1, 11, 3840) means that the data fed in for training or prediction should be a numpy array with a shape like (7, 1, 11, 3840), i.e. a batch of 7 objects. That 7 is the batch size, the number of objects processed in parallel.

So if one of your objects (e.g. an image) has shape (11, 3840), then you must write input_shape = (11, 3840) everywhere, without mentioning the batch size.

Why does Keras hide the 0-th (batch) dimension? Because Keras expects batches of varying size: today you may feed it 7 objects, tomorrow 9, and the same network should work for both. But the shape (11, 3840) of a single object should never change, and the training data produced by generate_arrays_for_training() should always have shape (BatchSize, 11, 3840), where BatchSize can vary; for example, you could generate a batch of 179 images, each of shape (11, 3840).
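
As a rough illustration, here is a toy sketch (the Flatten/Dense model and the random data are made up for illustration, not the question's actual network or data): input_shape describes one object, while the array you feed in carries an extra batch axis in front.

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Flatten, Dense

    # input_shape describes ONE object; the batch axis is added by the data you feed in.
    toy = Sequential([
        Flatten(input_shape=(1, 11, 3840)),
        Dense(2, activation='softmax'),
    ])

    X = np.random.rand(7, 1, 11, 3840).astype('float32')  # a batch of 7 objects, each of shape (1, 11, 3840)
    print(toy.predict(X).shape)                            # (7, 2): one prediction per object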

If the images entering your layers are supposed to be 3-dimensional, with 1 channel, then you must expand the generated training data with X = np.expand_dims(X, 0) so that your training X has shape (1, 1, 11, 3840), i.e. a batch containing 1 object; only then can you use input_shape = (1, 11, 3840).
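
A small numpy sketch of that shape change (the random array below just stands in for whatever generate_arrays_for_training() yields):

    import numpy as np

    X = np.random.rand(1, 11, 3840)   # one object with an explicit channel axis, shape (1, 11, 3840)
    X = np.expand_dims(X, 0)          # add the batch axis in front
    print(X.shape)                    # (1, 1, 11, 3840): a batch of 1, matching input_shape = (1, 11, 3840)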

Also, I see that you write data_format="channels_first" everywhere; by default all of these layers use channels_last. To avoid writing it everywhere, you can rearrange the data produced by generate_arrays_for_training(): if X has shape (1, 1, 11, 3840), do X = X.transpose(0, 2, 3, 1) and the channel becomes the last dimension.

Transposing moves a dimension to another position. But in your case, since you have just 1 channel, instead of transposing you can simply reshape: X of shape (1, 1, 11, 3840) can be turned into shape (1, 11, 3840, 1) with X = X.reshape(1, 11, 3840, 1). You only need this if you don't want to write "channels_first" everywhere; if you don't care about tidying the code, you don't need to transpose/reshape at all.
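
Both options, as a small numpy sketch (again with a random stand-in array):

    import numpy as np

    X = np.random.rand(1, 1, 11, 3840)      # (batch, channels, rows, cols), channels_first layout

    # Option 1: transpose moves the channel axis to the end
    print(X.transpose(0, 2, 3, 1).shape)    # (1, 11, 3840, 1)

    # Option 2: with a single channel, a reshape gives the same channels_last layout
    print(X.reshape(1, 11, 3840, 1).shape)  # (1, 11, 3840, 1)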

I remember that older versions of Keras somehow disliked dimensions of size 1 and basically tried to drop them in several different functions: if Keras saw an array of shape (1, 2, 1, 3, 1, 4), it would almost always try to reshape it to (2, 3, 4), so np.expand_dims() was effectively ignored. In that case the only workaround may be to generate batches of at least 2 images.

You can also read my longer answer; although it is somewhat off-topic, it may help you understand how training/prediction works in Keras, especially the final paragraphs numbered 1-12.

Update: the problem appears to have been solved by the following changes (a combined sketch follows the list):

  1. In the data-generation function, two expansions are needed, i.e. X = np.expand_dims(np.expand_dims(X, 0), 0).

  2. In the data-generation function, an additional X = X.transpose(0, 2, 3, 1) is needed.

  3. In the network code, the input shape is set to input_shape = (11, 3840, 1).

  4. In the network code, all occurrences of data_format = "channels_first" are removed.
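
A minimal sketch of the combined fix, assuming generate_arrays_for_training() originally yields one (11, 3840) array at a time (the generator's internals are not shown in the question):

    import numpy as np

    # Inside the data-generation function, for each generated sample X of shape (11, 3840):
    X = np.random.rand(11, 3840)                 # stand-in for one generated sample
    X = np.expand_dims(np.expand_dims(X, 0), 0)  # (1, 1, 11, 3840): add batch and channel axes
    X = X.transpose(0, 2, 3, 1)                  # (1, 11, 3840, 1): channels_last
    print(X.shape)

    # In createModel(), the first layer then becomes (no data_format argument needed):
    # model.add(Conv2D(16, (5, 5), strides=(2, 2), padding='same',
    #                  activation='relu', input_shape=(11, 3840, 1)))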
