
How to apply LSTM with CNN

My input has the shape (1, 12000, 250, 150, 3) and the labels for the CNN have the shape (1, 12000, 2). In other words, I am training a CNN with 2 classes on 250x150x3 images; the labels are [1, 0] or [0, 1].

Eventually this is meant to become a bot that plays Flappy Bird. I have been told that adding an LSTM so that several frames are classified together is the way to go. So far I have reached a val_acc of 0.984 with the following purely convolutional architecture.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Activation

model = Sequential()
model.add(Conv2D(32, 3, 3, border_mode='same', input_shape=(250,150,3), activation='relu'))
model.add(Conv2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Conv2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Conv2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Conv2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))

#model.add(LSTM(100, input_shape=(32, 32, 19), return_sequences=True))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.summary()

Accuracy:

Epoch 15/100
12800/12800 [==============================] - 89s 7ms/step - loss: 0.0390 - acc: 0.9889 - val_loss: 0.1422 - val_acc: 0.9717
Epoch 16/100
12800/12800 [==============================] - 89s 7ms/step - loss: 0.0395 - acc: 0.9883 - val_loss: 0.0917 - val_acc: 0.9821
Epoch 17/100
12800/12800 [==============================] - 89s 7ms/step - loss: 0.0357 - acc: 0.9902 - val_loss: 0.1383 - val_acc: 0.9816
Epoch 18/100
12800/12800 [==============================] - 89s 7ms/step - loss: 0.0452 - acc: 0.9871 - val_loss: 0.1153 - val_acc: 0.9750
Epoch 19/100
12800/12800 [==============================] - 90s 7ms/step - loss: 0.0417 - acc: 0.9892 - val_loss: 0.1641 - val_acc: 0.9668
Epoch 20/100
12800/12800 [==============================] - 90s 7ms/step - loss: 0.0339 - acc: 0.9904 - val_loss: 0.0927 - val_acc: 0.9840

I tried adding an LSTM layer, but I am not sure what is going wrong:

ValueError                                Traceback (most recent call last)
<ipython-input-6-59e402ac3b8a> in <module>
     26 model.add(Dropout(0.5))
     27 
---> 28 model.add(LSTM(100, input_shape=(32, 19), return_sequences=True))
     29 
     30 model.add(Dense(2))

E:\Applications\Anaconda3\envs\pygpu\lib\site-packages\keras\engine\sequential.py in add(self, layer)
    179                 self.inputs = network.get_source_inputs(self.outputs[0])
    180         elif self.outputs:
--> 181             output_tensor = layer(self.outputs[0])
    182             if isinstance(output_tensor, list):
    183                 raise TypeError('All layers in a Sequential model '

E:\Applications\Anaconda3\envs\pygpu\lib\site-packages\keras\layers\recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
    530 
    531         if initial_state is None and constants is None:
--> 532             return super(RNN, self).__call__(inputs, **kwargs)
    533 
    534         # If any of `initial_state` or `constants` are specified and are Keras

E:\Applications\Anaconda3\envs\pygpu\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
    412                 # Raise exceptions in case the input is not compatible
    413                 # with the input_spec specified in the layer constructor.
--> 414                 self.assert_input_compatibility(inputs)
    415 
    416                 # Collect input shapes to build layer.

E:\Applications\Anaconda3\envs\pygpu\lib\site-packages\keras\engine\base_layer.py in assert_input_compatibility(self, inputs)
    309                                      self.name + ': expected ndim=' +
    310                                      str(spec.ndim) + ', found ndim=' +
--> 311                                      str(K.ndim(x)))
    312             if spec.max_ndim is not None:
    313                 ndim = K.ndim(x)

ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2

The Keras documentation says that the LSTM arguments are (units, input_shape) and so on. I also read somewhere that TimeDistributed() is no longer needed, so I did not include it. Did I make a mistake when working out the input shape for the LSTM, or am I missing something else entirely?
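For reference, my understanding is that a standalone LSTM wants 3D input of shape (batch, timesteps, features), i.e. something like this minimal sketch (the 32 timesteps and 19 features are just the numbers I tried, not my real data):

from keras.models import Sequential
from keras.layers import LSTM

toy = Sequential()
# 3D input: (batch, timesteps, features) -> input_shape = (timesteps, features)
toy.add(LSTM(100, input_shape=(32, 19), return_sequences=True))
toy.summary()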


Edit 1: I removed the Flatten() layer and moved the LSTM layer so that it sits after the conv layers and before the fully connected layers. I also added a Reshape() so that the 4-dim output of the 4th conv block is reshaped to 3 dims, which can then be fed into the LSTM layer.

model.add(Conv2D(32, 3, 3, border_mode='same', input_shape=(250,150,3), activation='relu'))
model.add(Conv2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
output_1 = model.output_shape

model.add(Conv2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Conv2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
output_2 = model.output_shape

model.add(Conv2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Conv2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
output_3 = model.output_shape

model.add(Conv2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Conv2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
output_4 = model.output_shape

model.add(Reshape((15, 9)))
output_5 = model.output_shape
model.add(LSTM(100, input_shape=(15, 9, 256), return_sequences=True))

These are the output shapes at each stage:

Conv_1: (None, 125, 75, 32)
Conv_2: (None, 62, 37, 64)
Conv_3: (None, 31, 18, 128)
Conv_4: (None, 15, 9, 256)

When I try to reshape conv_4 so that the LSTM gets a 3-dim input, the following happens:

ValueError                                Traceback (most recent call last)
<ipython-input-21-7f5240e41ae4> in <module>
     22 output_4 = model.output_shape
     23 
---> 24 model.add(Reshape((15, 9)))
     25 output_5 = model.output_shape
     26 model.add(LSTM(100, input_shape=(15, 9, 256), return_sequences=True))

E:\Applications\Anaconda3\envs\pygpu\lib\site-packages\keras\engine\sequential.py in add(self, layer)
    179                 self.inputs = network.get_source_inputs(self.outputs[0])
    180         elif self.outputs:
--> 181             output_tensor = layer(self.outputs[0])
    182             if isinstance(output_tensor, list):
    183                 raise TypeError('All layers in a Sequential model '

E:\Applications\Anaconda3\envs\pygpu\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
    472             if all([s is not None
    473                     for s in to_list(input_shape)]):
--> 474                 output_shape = self.compute_output_shape(input_shape)
    475             else:
    476                 if isinstance(input_shape, list):

E:\Applications\Anaconda3\envs\pygpu\lib\site-packages\keras\layers\core.py in compute_output_shape(self, input_shape)
    396             # input shape known? then we can compute the output shape
    397             return (input_shape[0],) + self._fix_unknown_dimension(
--> 398                 input_shape[1:], self.target_shape)
    399 
    400     def call(self, inputs):

E:\Applications\Anaconda3\envs\pygpu\lib\site-packages\keras\layers\core.py in _fix_unknown_dimension(self, input_shape, output_shape)
    384             output_shape[unknown] = original // known
    385         elif original != known:
--> 386             raise ValueError(msg)
    387 
    388         return tuple(output_shape)

ValueError: total size of new array must be unchanged

Any help is greatly appreciated.


First of all, I don't see an LSTM in your model; it is just 4 conv blocks and 3 fully connected layers, right? And why do you have two Conv2D layers one after another?

I would use the LSTM on the frames, not as the first fully connected layer right after flattening.

I don't know how it is done in Keras, but for any RNN cell the input is a 3D array, e.g. (batch_size, max_sequence, items) or (max_sequence, batch_size, items); the second format is a bit unusual.

The error you are getting is: expected ndim=3, found ndim=2

So I assume you are feeding in a 2D array instead of a 3D one.

You can change the flattening to create a valid 3D input. For example, you could do this by feeding in 5D input but using 2D convolutions, e.g. batch size = 100, frames = 3, channels = 3, items = 28, 28 (height, width), and then flatten to (100, 3, -1), where -1 stands for the rest.
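Roughly, in Keras the idea might look like the sketch below (just a sketch of the concept, not tested; the 3-frame window and layer sizes are placeholders, and TimeDistributed is one way to apply the same convolutions to every frame):

from keras.models import Sequential
from keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM, Dense

FRAMES = 3  # hypothetical number of consecutive game frames per sample

model = Sequential()
# Apply the same 2D convolution to every frame in the window
model.add(TimeDistributed(Conv2D(32, (3, 3), padding='same', activation='relu'),
                          input_shape=(FRAMES, 250, 150, 3)))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
# Flatten each frame's feature map -> 3D tensor (batch, FRAMES, features)
model.add(TimeDistributed(Flatten()))
model.add(LSTM(100))  # consumes the sequence of per-frame feature vectors
model.add(Dense(2, activation='sigmoid'))
model.summary()

As for the Reshape error in your edit: Reshape has to keep the total number of elements unchanged (excluding the batch dimension), so (15, 9, 256) cannot be squeezed into (15, 9); something like Reshape((15 * 9, 256)) would at least give the LSTM a valid 3D input, treating the 135 spatial positions as timesteps.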

I need to try something similar myself, but I am doing it in PyTorch...

If I scroll down a bit further in the docs, there is ConvLSTM2D, which should solve my problem. Going to try that now.
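From the documentation, ConvLSTM2D takes 5D input of shape (samples, time, rows, cols, channels), so presumably something along these lines (untested; the number of frames per sample is a placeholder):

from keras.models import Sequential
from keras.layers import ConvLSTM2D, Flatten, Dense

FRAMES = 3  # hypothetical number of consecutive frames per sample

model = Sequential()
# Convolves over space and recurs over the time (frame) dimension
model.add(ConvLSTM2D(32, (3, 3), padding='same', return_sequences=False,
                     input_shape=(FRAMES, 250, 150, 3)))
model.add(Flatten())
model.add(Dense(2, activation='sigmoid'))
model.summary()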
