
What is the correct way to chain layers in the Keras functional API?

I am learning how to use the Keras functional API, and my question is quite simple, but I have not been able to find an answer on the internet. What is the correct way of naming chained layers in Keras? Should their names be the same or different? Is there any convention or rule about it?

Let me show you two examples. The first one is taken directly from the Keras functional API guide:

x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)

The second example is my own:

second = Dense(64, activation='relu')(first)
third = Dense(64, activation='relu')(second)
fourth = Dense(64, activation='relu')(third)

I tried both methods and they give me the same performance. Is there any functional difference between these two ways? If not, is there at least a 'formal convention'?

No, there isn't. Choosing the variable names is purely up to you. As far as the computation graph (your network) is concerned, both snippets construct the same model.
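
To see this concretely, here is a minimal sketch that builds the same three-layer stack both ways; the 32-feature input shape and the Model/summary() calls are only illustrative, but both models end up with an identical layer stack and parameter count:

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(32,))                    # arbitrary example input shape

# Style 1: reuse one variable name for the running tensor
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
model_a = Model(inputs, x)

# Style 2: give every intermediate tensor its own name
first = Dense(64, activation='relu')(inputs)
second = Dense(64, activation='relu')(first)
third = Dense(64, activation='relu')(second)
model_b = Model(inputs, third)

model_a.summary()   # same architecture and parameter count...
model_b.summary()   # ...as this one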

The only reason you might use different variable names is to refer to those layers' outputs later on, for example to concatenate an earlier layer's output with a later one when building residual/skip connections:

from keras.layers import Input, Dense, Concatenate

inputs = Input(shape=(32,))               # example input shape
x = Dense(64, activation='relu')(inputs)
y = Dense(64, activation='relu')(x)
z = Concatenate()([x, y])                 # reuses the saved reference to x
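
For example, a quick sketch of how those references might be wired into a complete model; the Dense(10) softmax head is an arbitrary choice here, and Add() gives a true residual connection when the two shapes match:

from keras.layers import Input, Dense, Add
from keras.models import Model

inputs = Input(shape=(32,))
x = Dense(64, activation='relu')(inputs)
y = Dense(64, activation='relu')(x)
res = Add()([x, y])                              # residual/skip connection: shapes must match
outputs = Dense(10, activation='softmax')(res)   # illustrative classification head

model = Model(inputs, outputs)
model.summary()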
