Recurrent Neural Network
Can someone explain to me what the layer_size hyperparameter does in this recurrent neural network model?
### RNN model: binary classification
import keras
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, Activation
from sklearn.metrics import accuracy_score, precision_score, recall_score

batch_size = 32
epochs = 10
layer_size = 256  # number of LSTM units per layer
drop_out = 0.001

model = Sequential()
model.add(LSTM(layer_size, input_shape=(30, 1), return_sequences=True))
model.add(Dropout(drop_out))
model.add(LSTM(layer_size * 2, return_sequences=True))
model.add(Dropout(drop_out))
model.add(LSTM(layer_size, return_sequences=False))
model.add(Dropout(drop_out))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.summary()

fit = model.fit(X_train_kb2_keras, y_train2_kb2_keras, batch_size=batch_size, epochs=epochs, validation_split=0.20)

# predict_classes was removed in recent Keras versions; threshold predict() instead
y_pred = (model.predict(X_test_keras) > 0.5).astype(int)
print("Accuracy", accuracy_score(y_test2_kb2_keras, y_pred))
print("precision_score", precision_score(y_test2_kb2_keras, y_pred))
print("recall", recall_score(y_test2_kb2_keras, y_pred))
According to the documentation, it is the dimensionality of the output: https://keras.io/api/layers/recurrent_layers/lstm/ (layer_size corresponds to units in the documentation).
So the output of each timestep of that LSTM layer should contain 256 elements. One way to verify this is to build the layer in isolation, call it on a sample input, and inspect the shape of the result:
import numpy as np

temp_layer = LSTM(layer_size, input_shape=(30, 1), return_sequences=True)
temp = temp_layer(np.zeros((1, 30, 1), dtype="float32"))  # dummy batch of one sample
print(temp.shape)  # (1, 30, 256): last axis equals layer_size
print(temp)
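Beyond the output width, units also determines the layer's weight count, which is another way to see what the hyperparameter controls. A minimal sketch, assuming a standard Keras LSTM with the question's sizes (the names temp_units, dummy input, etc. are illustrative, not from the original post):

```python
import numpy as np
from tensorflow.keras.layers import LSTM

temp_units = 256   # same value as layer_size in the question
input_dim = 1      # features per timestep, per input_shape=(30, 1)

layer = LSTM(temp_units, return_sequences=True)
out = layer(np.zeros((1, 30, input_dim), dtype="float32"))  # build weights with a dummy batch
print(tuple(out.shape))  # (1, 30, 256): last axis equals `units`

# Each of the 4 LSTM gates has an input kernel (input_dim x units),
# a recurrent kernel (units x units), and a bias (units):
expected_params = 4 * temp_units * (input_dim + temp_units + 1)
print(layer.count_params())  # 264192, matching the formula above
```

Since the parameter count grows roughly quadratically in units, doubling layer_size (as the middle LSTM(layer_size * 2) layer does) roughly quadruples that layer's weights.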