
How to implement Conv3D on a sequence of images?

I am supposed to apply Conv3D to a sequence of images. The file contains 72 images, each of size (16,16,3), where 3 is the number of channels. Below is my code:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.layers import Conv3D, MaxPooling3D
from tensorflow.keras.utils import to_categorical

input_shape = (7, 16, 16, 3)  # 7 is the no. of images to be input to Conv3D at a time.
no_classes = 1
epoch = 7
verbosity = 1
learning_rate = 0.001

model = Sequential()
model.add(Conv3D(5, kernel_size=(3, 3, 3), padding='same', activation='relu',
                 kernel_initializer='he_uniform', input_shape=input_shape))
model.add(MaxPooling3D(pool_size=(2, 2, 3), padding='same'))
model.add(Conv3D(3, kernel_size=(3, 3, 3), padding='same', activation='relu',
                 kernel_initializer='he_uniform'))
model.add(MaxPooling3D(pool_size=(2, 2, 2), padding='same'))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(no_classes, activation='softmax'))

model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.keras.optimizers.Adam(lr=learning_rate),
              metrics=['accuracy'])

model.summary()

# Fit data to model
history = model.fit(trainX, new_trainy,
                    batch_size=25,
                    epochs=7)

I am getting the following error:

 Epoch 1/7
 ---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-47-92321d5fc5a8> in <module>()
     2 history = model.fit(trainX, new_trainy,
     3             batch_size = batch_size,
----> 4             epochs = epoch)

9 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, 
batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, 
sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, 
validation_freq, max_queue_size, workers, use_multiprocessing)
   1098                 _r=1):
   1099               callbacks.on_train_batch_begin(step)
-> 1100               tmp_logs = self.train_function(iterator)
   1101               if data_handler.should_sync:
   1102                 context.async_wait()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, 
*args, **kwds)
    826     tracing_count = self.experimental_get_tracing_count()
    827     with trace.Trace(self._name) as tm:
--> 828       result = self._call(*args, **kwds)
    829       compiler = "xla" if self._experimental_compile else "nonXla"
    830       new_tracing_count = self.experimental_get_tracing_count()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, 
**kwds)
    869       # This is the first call of __call__, so we have to initialize.
    870       initializers = []
--> 871       self._initialize(args, kwds, add_initializers_to=initializers)
    872     finally:
    873       # At this point we know that the initialization is complete (or less

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, 
args, kwds, add_initializers_to)
    724     self._concrete_stateful_fn = (
    725         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: 
disable=protected-access
--> 726             *args, **kwds))
    727 
    728     def invalid_creator_scope(*unused_args, **unused_kwds):

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in 
_get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   2967       args, kwargs = None, None
   2968     with self._lock:
-> 2969       graph_function, _ = self._maybe_define_function(args, kwargs)
   2970     return graph_function
   2971 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in 
_maybe_define_function(self, args, kwargs)
   3359 
   3360           self._function_cache.missed.add(call_context_key)
-> 3361           graph_function = self._create_graph_function(args, kwargs)
   3362           self._function_cache.primary[cache_key] = graph_function
   3363 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in 
_create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3204             arg_names=arg_names,
   3205             override_flat_arg_shapes=override_flat_arg_shapes,
-> 3206             capture_by_value=self._capture_by_value),
   3207         self._function_attributes,
   3208         function_spec=self.function_spec,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in 
func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, 
autograph_options, add_control_dependencies, arg_names, op_return_value, collections, 
capture_by_value, override_flat_arg_shapes)
    988         _, original_func = tf_decorator.unwrap(python_func)
    989 
--> 990       func_outputs = python_func(*func_args, **func_kwargs)
    991 
    992       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, 
**kwds)
    632             xla_context.Exit()
    633         else:
--> 634           out = weak_wrapped_fn().__wrapped__(*args, **kwds)
    635         return out
    636 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, 
**kwargs)
    975           except Exception as e:  # pylint:disable=broad-except
    976             if hasattr(e, "ag_error_metadata"):
--> 977               raise e.ag_error_metadata.to_exception(e)
    978             else:
    979               raise

ValueError: in user code:

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
    return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step  **
    outputs = model.train_step(data)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:754 train_step
    y_pred = self(x, training=True)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:998 __call__
    input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/input_spec.py:239 assert_input_compatibility
    str(tuple(shape)))

ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=5, found ndim=4. Full shape received: (None, 16, 16, 3)

trainX has shape (72,16,16,3), where 72 is the number of images, (16,16) are the image dimensions, and 3 is the number of channels, i.e. RGB.
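For context, here is a minimal sketch of the mismatch (using random dummy data as a stand-in for the real trainX, which is an assumption on my part): the model defined above expects 5-D batches of shape (batch, 7, 16, 16, 3), while a stack of individual frames is only 4-D.

import numpy as np

# Hypothetical stand-in for the real data: 72 RGB frames of 16x16 pixels.
dummy_trainX = np.random.rand(72, 16, 16, 3).astype('float32')

print(model.input_shape)   # (None, 7, 16, 16, 3) -- 5-D including the batch axis
print(dummy_trainX.shape)  # (72, 16, 16, 3)      -- only 4-D, hence the ValueError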

Can anyone help me get rid of this error? Any help would be greatly appreciated.

I actually found where I was going wrong:

Each input to Conv3D must be a 4-dimensional data point, whereas in my case each data point in the dataset, i.e. in trainX, is 3-dimensional with shape (16,16,3). To give every data point the required 4-D shape, i.e. (7,16,16,3), I dropped the last 2 images from trainX so that the number of images (data points) in trainX, i.e. 70, is a multiple of 7. Next, I reshaped trainX into 5-D data.

trainX now contains 10 data points, each of which is 4-D, i.e. (7,16,16,3).

This eliminated the error and my model runs perfectly. Here is the code:

new_trainX = trainX[:70, :, :, :]
new_trainX2 = tf.reshape(new_trainX, shape=(10, 7, 16, 16, 3))

# Fit data to model
history = model.fit(new_trainX2, new_trainy,
                    batch_size = batch_size,
                    epochs = epoch)

Here is the output:

   Epoch 1/7
   1/1 [==============================] - 1s 557ms/step - loss: 0.0000e+00 - accuracy: 1.0000
   Epoch 2/7
   1/1 [==============================] - 0s 129ms/step - loss: 0.0000e+00 - accuracy: 1.0000
   Epoch 3/7
   1/1 [==============================] - 0s 131ms/step - loss: 0.0000e+00 - accuracy: 1.0000
   Epoch 4/7
   1/1 [==============================] - 0s 129ms/step - loss: 0.0000e+00 - accuracy: 1.0000
   Epoch 5/7
   1/1 [==============================] - 0s 128ms/step - loss: 0.0000e+00 - accuracy: 1.0000
   Epoch 6/7
   1/1 [==============================] - 0s 140ms/step - loss: 0.0000e+00 - accuracy: 1.0000
   Epoch 7/7
   1/1 [==============================] - 0s 129ms/step - loss: 0.0000e+00 - accuracy: 1.0000     
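One note on the labels: the post does not show how new_trainy is built. Assuming it originally held one label per frame, i.e. shape (72, no_classes) (an assumption, not something stated above), it also has to be reduced to one entry per 7-frame clip so that its number of samples matches new_trainX2; otherwise model.fit will complain about a mismatch between input and target sample counts. A sketch under that assumption:

# Hypothetical: if new_trainy holds one label per frame (shape (72, no_classes)),
# keep one label per 7-frame clip so the sample counts line up with new_trainX2.
new_trainy_clips = new_trainy[:70][::7]   # -> shape (10, no_classes)

print(new_trainX2.shape[0], new_trainy_clips.shape[0])   # both must be 10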
 

Hope this helps.
