
Combining RNN and CNN with lasagne

I am trying to run a 1D CNN on 10 s segments of EEG data and then capture the temporal connections between the segments using an RNN.

The problem is that the RNN expects input of shape batch_size x sequence_length x num_inputs, while the CNN outputs batch_size x num_filters x sequence_length.

This can be solved with a dim-shuffle layer:

import lasagne.layers as L

network = L.InputLayer(shape=(None, data_size[1], data_size[2]), input_var=input_var)
# Conv1DLayer outputs (batch_size, num_filters, sequence_length)
network = L.Conv1DLayer(network, num_filters=32, filter_size=5)
# swap the filter and time axes to get (batch_size, sequence_length, num_filters)
network = L.DimshuffleLayer(network, (0, 2, 1))
network = L.LSTMLayer(network, num_units=200)
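
As a quick sanity check (not part of the original snippet), lasagne.layers.get_output_shape can confirm that the dimshuffle produced the (batch, time, features) layout the LSTM expects:

print(L.get_output_shape(network))  # e.g. (None, data_size[2] - 4, 200) after the LSTM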

But to my understanding, the RNN will now only capture temporal connections within sequence_length, not between different batches. Is that right?

How can I get the temporal connection between segments?

Answering my own question:

The RNN will indeed only learn dependencies within one batch. However, Keras has a mode that allows state to be carried over between batches: stateful=True

# units come first in Keras; stateful=True carries state across batches
network = keras.layers.LSTM(200, stateful=True)(network)
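
For reference, here is a minimal sketch of the surrounding stateful Keras model; the layer sizes, the Dense head, and the loss are assumptions for illustration, not part of the original answer. Note that a stateful LSTM requires a fixed batch size, declared via batch_input_shape:

import keras

batch_size, seq_len, num_feat = 16, 1996, 32  # hypothetical sizes

model = keras.models.Sequential()
# stateful layers need the batch size fixed up front
model.add(keras.layers.LSTM(200, stateful=True,
                            batch_input_shape=(batch_size, seq_len, num_feat)))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')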

Now it is important to feed the batches in the right order: the i-th element of each batch is processed starting from the state left by the i-th element of the previous batch at time t-1. That means you need to be very careful when assembling your batches; see the sketch below.
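
A sketch of that ordering, assuming a hypothetical array segments of shape (num_recordings, num_segments, seq_len, num_feat) with matching labels, where num_recordings equals the fixed batch size: batch s holds segment s of every recording, so sample i of each batch continues recording i.

# hypothetical data: segments[r, s] = segment s of recording r
for s in range(num_segments):
    x = segments[:, s]          # segment s of all recordings at once
    y = labels[:, s]
    model.train_on_batch(x, y)  # sample i resumes recording i's state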

Note, this will only carry the cell state across batches; it will not backpropagate between batches. As a side effect, the initial state when predicting will have to be set and will bias your results.
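
In Keras the carried-over state can be cleared explicitly with model.reset_states(), for example between epochs or before predicting on new recordings, so leftover training state does not bias the first predictions (test_segments here is a hypothetical array shaped like the training batches):

model.reset_states()  # drop the carried-over cell states
predictions = model.predict(test_segments, batch_size=batch_size)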
