What is the proper way to add layers using Keras functional API?
I am trying to use the Keras functional API to create a model with two branches, but I need to add the output of the first branch (path23: m, n, 5) to the output of the second branch (path10: m, n, 1), and I need the result to have shape (m, n, 1), not (m, n, 5), which is what I get now. In other words, I need to sum the 5 feature maps of the first branch with the single tensor in the second branch without relying on broadcasting. How can I do it?
Please check the code and the picture attached.
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Dense, ReLU, Conv1D, Add
from tensorflow.keras.utils import plot_model

# the original defines these elsewhere; any initializer works for the example
initializer = tf.keras.initializers.GlorotUniform()
input_shape = (32767, 1)  # matches the model summary below

def define_neural_network_model(input_shape, outputs=1):
    input_layer = Input(shape=input_shape)
    # first path
    path10 = input_layer
    # second path
    path20 = input_layer
    path21 = Dense(1, use_bias=True, kernel_initializer=initializer)(path20)
    path22 = ReLU()(path21)
    path23 = Conv1D(filters=5, kernel_size=3, strides=1, padding="same",
                    use_bias=True, kernel_initializer=initializer)(path22)
    # merge interpretation
    output = Add()([path10, path23])
    model = Model(inputs=input_layer, outputs=output)
    model._name = 'Recovery'
    return model

neural_network_model = define_neural_network_model(input_shape)
# neural_network_model.summary()
plot_model(neural_network_model, to_file='generator_model.png',
           show_shapes=True, show_layer_names=True)
It has been a while since I asked this question. I am going to answer it because it may be useful to someone else.
The sum over one of the dimensions of the convolution output can be done with a Lambda layer. Reshaping is then needed, since one dimension is lost during the sum. The code is below, and the neural network diagram is attached (1).
import tensorflow as tf
from tensorflow.keras import Model, backend
from tensorflow.keras.layers import Input, Dense, ReLU, Conv1D, Add, Lambda
from tensorflow.keras.utils import plot_model

# the original defines these elsewhere; any initializer works for the example
initializer = tf.keras.initializers.GlorotUniform()
input_shape = (32767, 1)  # matches the model summary below

def define_neural_network_model(input_shape, outputs=1):
    input_layer = Input(shape=input_shape)
    # first path
    path10 = input_layer
    # second path
    path20 = input_layer
    path21 = Dense(1, use_bias=True, kernel_initializer=initializer)(path20)
    path22 = ReLU()(path21)
    path23 = Conv1D(filters=5, kernel_size=3, strides=1, padding="same",
                    use_bias=True, kernel_initializer=initializer)(path22)
    # sum the 5 feature maps, then restore the channel axis lost by the sum
    path24 = Lambda(lambda x: backend.expand_dims(backend.sum(x, axis=-1),
                                                  axis=-1))(path23)
    # merge interpretation
    output = Add()([path10, path24])
    model = Model(inputs=input_layer, outputs=output)
    model._name = 'Recovery'
    return model

neural_network_model = define_neural_network_model(input_shape)
neural_network_model.summary()
plot_model(neural_network_model, to_file='generator_model_corrected.png',
           show_shapes=True, show_layer_names=True)
Model: "Recovery"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_12 (InputLayer) [(None, 32767, 1)] 0 []
dense_13 (Dense) (None, 32767, 1) 2 ['input_12[0][0]']
re_lu_13 (ReLU) (None, 32767, 1) 0 ['dense_13[0][0]']
conv1d_13 (Conv1D) (None, 32767, 5) 20 ['re_lu_13[0][0]']
lambda_4 (Lambda) (None, 32767, 1) 0 ['conv1d_13[0][0]']
add_11 (Add) (None, 32767, 1) 0 ['input_12[0][0]',
'lambda_4[0][0]']
==================================================================================================
Total params: 22
Trainable params: 22
Non-trainable params: 0
__________________________________________________________________________________________________
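To see why the Lambda layer makes the shapes line up for the Add layer, here is a minimal NumPy sketch of the same shape arithmetic (toy dimensions standing in for the model's real 32767-long sequences):

```python
import numpy as np

conv_out = np.random.rand(2, 8, 5)   # stand-in for the Conv1D branch (path23)
skip = np.random.rand(2, 8, 1)       # stand-in for the identity branch (path10)

# What the Lambda computes: sum over channels, then restore the lost axis
summed = np.expand_dims(conv_out.sum(axis=-1), axis=-1)  # shape (2, 8, 1)

# The same result in one step via keepdims
summed_keepdims = conv_out.sum(axis=-1, keepdims=True)
assert np.allclose(summed, summed_keepdims)

# The residual addition now matches shapes exactly, no broadcasting needed
output = skip + summed
print(output.shape)  # (2, 8, 1)
```

The keepdims form also works inside the model, e.g. `Lambda(lambda x: tf.reduce_sum(x, axis=-1, keepdims=True))`, which folds the sum and reshape into a single call.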