
Why is a CNN-LSTM faster than an LSTM?

I'm confused about the cause of the speed-up. The gain in training and prediction speed is huge, more than 50×.

This is how I create the LSTM model:

from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation, Dropout
from keras.optimizers import Adam

def create_model(learning_rate, num_LSTM_layers,
                 num_LSTM_nodes, dropout_rate):

    # CREATE THE LSTM NEURAL NETWORK
    model = Sequential()
    # First LSTM layer: return full sequences if more LSTM layers follow.
    if num_LSTM_layers > 1:
        model.add(LSTM(num_LSTM_nodes, return_sequences=True))
    if num_LSTM_layers == 1:
        model.add(LSTM(num_LSTM_nodes, return_sequences=False))
    model.add(Activation('relu'))
    model.add(Dropout(dropout_rate))

    # Remaining LSTM layers: only the last one collapses the sequence.
    if num_LSTM_layers > 1:
        for i in range(num_LSTM_layers-1):
            if i+1 == num_LSTM_layers-1:
                model.add(LSTM(num_LSTM_nodes, return_sequences=False))
            else:
                model.add(LSTM(num_LSTM_nodes, return_sequences=True))
            model.add(Activation('relu'))
            model.add(Dropout(dropout_rate))

    # Single linear output for regression.
    model.add(Dense(1))
    model.add(Activation('linear'))

    # Use the Adam method for training the network.
    # We want to find the best learning-rate for the Adam method.
    # (Newer Keras versions use learning_rate= instead of lr=.)
    optimizer = Adam(lr=learning_rate)

    # In Keras we need to compile the model so it can be trained.
    model.compile(loss='mean_squared_error', optimizer=optimizer)

    return model
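
For reference, a minimal usage sketch; the data, its 3-D shape (samples, timesteps, features), and all hyperparameter values are assumptions for illustration, not the actual setup from the question:

import numpy as np

# Hypothetical data: 1000 samples, 64 time steps, 8 features.
X_train = np.random.rand(1000, 64, 8).astype('float32')
y_train = np.random.rand(1000, 1).astype('float32')

model = create_model(learning_rate=1e-3, num_LSTM_layers=2,
                     num_LSTM_nodes=32, dropout_rate=0.2)
model.fit(X_train, y_train, epochs=2, batch_size=32)

Note that here the LSTM has to iterate over all 64 time steps of every sample, one step after another.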

This is how I create the CNN-LSTM model:

from keras.models import Sequential
from keras.layers import (LSTM, Dense, Activation, Dropout, Conv1D,
                          MaxPooling1D, Flatten, TimeDistributed)
from keras.optimizers import Adam

def create_model_TD(learning_rate, num_conv_layers, num_LSTM_layers,
                    num_LSTM_nodes, dropout_rate, filter_size, kernel_height, pool_size):

    # CREATE THE CNN-LSTM NEURAL NETWORK
    model = Sequential()
    # input_shape belongs on the TimeDistributed wrapper, not the inner
    # Conv1D: (subsequences, steps_per_subsequence, features), with None
    # allowing a variable number of subsequences per sample.
    model.add(TimeDistributed(Conv1D(filters=int(filter_size), kernel_size=int(kernel_height),
                                     activation='relu', padding='causal'),
                              input_shape=(None, X_train.shape[2], X_train.shape[3])))
    if num_conv_layers == 2:
        model.add(TimeDistributed(Conv1D(filters=int(filter_size), kernel_size=int(kernel_height),
                                         activation='relu', padding='causal')))
    # Pooling shortens each subsequence; Flatten turns it into one vector.
    model.add(TimeDistributed(MaxPooling1D(pool_size=int(pool_size))))
    model.add(TimeDistributed(Flatten()))
    if num_LSTM_layers > 1:
        model.add(LSTM(num_LSTM_nodes, return_sequences=True))
    if num_LSTM_layers == 1:
        model.add(LSTM(num_LSTM_nodes, return_sequences=False))
    model.add(Activation('relu'))
    model.add(Dropout(dropout_rate))

    if num_LSTM_layers > 1:
        for i in range(num_LSTM_layers-1):
            if i+1 == num_LSTM_layers-1:
                model.add(LSTM(num_LSTM_nodes, return_sequences=False))
            else:
                model.add(LSTM(num_LSTM_nodes, return_sequences=True))
            model.add(Activation('relu'))
            model.add(Dropout(dropout_rate))

    model.add(Dense(1))
    model.add(Activation('linear'))

    # Use the Adam method for training the network.
    # We want to find the best learning-rate for the Adam method.
    optimizer = Adam(lr=learning_rate)

    # In Keras we need to compile the model so it can be trained.
    model.compile(loss='mean_squared_error', optimizer=optimizer)

    return model
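
Again a minimal usage sketch, with hypothetical shapes and hyperparameters. The key point is that the input is reshaped to 4-D, (samples, subsequences, steps_per_subsequence, features), so each sample's 64-step series becomes 8 subsequences of 8 steps:

import numpy as np

# Hypothetical data: 1000 samples, 8 subsequences x 8 steps, 8 features.
X_train = np.random.rand(1000, 8, 8, 8).astype('float32')
y_train = np.random.rand(1000, 1).astype('float32')

model = create_model_TD(learning_rate=1e-3, num_conv_layers=1,
                        num_LSTM_layers=2, num_LSTM_nodes=32,
                        dropout_rate=0.2, filter_size=16,
                        kernel_height=3, pool_size=2)
model.fit(X_train, y_train, epochs=2, batch_size=32)

Here the outer LSTM only iterates over 8 subsequence summaries instead of 64 raw time steps.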

But when I look at the number of trainable parameters, the CNN-LSTM seems to have even more parameters than the plain LSTM. Does anyone know the reason? I'd appreciate any help, thanks.

A CNN reduces complexity in several ways by focusing on the key features. The convolutional layers shrink the tensors flowing through the network, pooling shrinks them further, and the ReLU activations are cheap to compute. Hence the training time drops.

Note also that the number of trainable parameters is a poor proxy for runtime. An LSTM must process its input one time step after another, so its cost grows with sequence length, whereas the convolutions within each subsequence are computed in parallel. After the TimeDistributed CNN and pooling have compressed each subsequence into a single vector, the LSTM only has to step through the much shorter sequence of subsequence summaries. That is where most of the 50× speed-up comes from: more parameters, but far fewer sequential recurrent steps.
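
A quick sketch to make this concrete, reusing the hypothetical shapes and the X_train defined in the sketches above: the CNN-LSTM typically reports more trainable parameters, yet its recurrent part runs 8 sequential steps per sample instead of 64.

import numpy as np

lstm_model = create_model(learning_rate=1e-3, num_LSTM_layers=2,
                          num_LSTM_nodes=32, dropout_rate=0.2)
cnn_lstm_model = create_model_TD(learning_rate=1e-3, num_conv_layers=1,
                                 num_LSTM_layers=2, num_LSTM_nodes=32,
                                 dropout_rate=0.2, filter_size=16,
                                 kernel_height=3, pool_size=2)

# A dummy forward pass builds each model so the parameters can be counted.
lstm_model.predict(np.zeros((1, 64, 8), dtype='float32'))
cnn_lstm_model.predict(np.zeros((1, 8, 8, 8), dtype='float32'))

print('LSTM parameters:    ', lstm_model.count_params())
print('CNN-LSTM parameters:', cnn_lstm_model.count_params())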

For a beginner-friendly, deeper look at CNNs, see:

https://www.semanticscholar.org/paper/Introduction-to-Convolutional-Neural-Networks-Wu/450ca19932fcef1ca6d0442cbf52fec38fb9d1e5
