
What is the correct way to chain layers in the Keras functional API?

I am learning how to use the Keras functional API, and my question is quite simple, but I have not been able to find any answer on the internet. What is the correct way of naming chained layers in Keras? Should their names be the same, or different? Is there any convention or rule about it?

Let me show you two examples. The first one is taken directly from the Keras functional API guide:

x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)

The second example is my own:

second = Dense(64, activation='relu')(first)
third = Dense(64, activation='relu')(second)
fourth = Dense(64, activation='relu')(third)

I tried both methods, and both give me the same performance. Is there any functional difference between these two ways? If not, is there at least some 'formal convention'?

No, there isn't. Choosing the variable names is purely up to you. As far as the computation graph (your network) is concerned, both construct the same model.
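To make that concrete, here is a minimal, self-contained sketch (the 32-dimensional input shape and the `tensorflow.keras` imports are assumptions added for illustration); both naming styles produce models with identical layer structure and parameter counts:

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(32,))  # arbitrary input shape, chosen for this example

# Style 1: reuse the same variable name for every layer output
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
model_a = Model(inputs, x)

# Style 2: give each layer output its own name
first = Dense(64, activation='relu')(inputs)
second = Dense(64, activation='relu')(first)
third = Dense(64, activation='relu')(second)
model_b = Model(inputs, third)

# Both summaries show the same architecture and the same parameter counts.
model_a.summary()
model_b.summary()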

The only reason you might use different variable names is to refer to those layers later on, for example to concatenate the first layer with the fourth to create residual networks, etc.:

x = Dense(64, activation='relu')(input)
y = Dense(64, activation='relu')(x)
z = Concatenate()([x, y])  # both x and y are needed here, hence the distinct names
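For completeness, a runnable version of that snippet wrapped into a full model might look like the sketch below (the input shape and the final softmax output layer are assumptions added purely for illustration):

from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model

inputs = Input(shape=(32,))                 # assumed input shape
x = Dense(64, activation='relu')(inputs)
y = Dense(64, activation='relu')(x)
z = Concatenate()([x, y])                   # x is reused here, so it needs its own name
outputs = Dense(10, activation='softmax')(z)  # assumed output layer

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()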
