Every time we call tf.layers.conv2d, TensorFlow automatically creates a new kernel and bias, and derives a new name for the tensor by appending _1, _2, and so on, like:
<tf.Variable 'conv2d/kernel:0' shape=(3, 3, 10, 10) dtype=float32_ref>,
<tf.Variable 'conv2d/bias:0' shape=(10,) dtype=float32_ref>,
<tf.Variable 'conv2d_1/kernel:0' shape=(3, 3, 10, 10) dtype=float32_ref>,
<tf.Variable 'conv2d_1/bias:0' shape=(10,) dtype=float32_ref>
conv2d/BiasAdd:0
conv2d_1/BiasAdd:0
If I would like to define a similar layer with variables inside, for example:

def some_layer(input):
    gamma = tf.get_variable(name='gamma', shape=[10], dtype=tf.float32,
                            initializer=tf.constant_initializer(1.0))
    return input * gamma

x = some_layer(input)
y = some_layer(input)
then the second call raises ValueError: Variable gamma already exists. I know I could give each call its own name scope or variable scope, but I am wondering whether there is any way to automatically create new variables gamma_1:0, gamma_2:0, and so on, the way conv2d/kernel:0 and conv2d_1/kernel:0 are created after each call to tf.layers.conv2d.
The definition of tf.layers.conv2d gives me no hint.
The TensorFlow tutorial on custom layers gives the answer: to build a custom layer whose variables are named uniquely per instance, extend the class tf.keras.layers.Layer.
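A minimal sketch of that approach, assuming TF 2.x with tf.keras (the class name GammaLayer is my own; the variable mirrors the gamma from the question). Variables created via self.add_weight inside build() are scoped by the layer's name, and Keras uniquifies layer names per instance, so each instance gets its own gamma with no ValueError:

```python
import tensorflow as tf

class GammaLayer(tf.keras.layers.Layer):
    def build(self, input_shape):
        # Analogue of tf.get_variable(name='gamma', ...) from the question.
        # add_weight ties the variable's lifetime and name to this layer
        # instance, so repeated instantiation never collides.
        self.gamma = self.add_weight(
            name='gamma', shape=[10], dtype=tf.float32,
            initializer=tf.constant_initializer(1.0))

    def call(self, inputs):
        return inputs * self.gamma

inp = tf.ones([1, 10])
x = GammaLayer()(inp)  # creates its own gamma
y = GammaLayer()(inp)  # a second, independent gamma; no name collision
```

Each instance receives a uniquified layer name (e.g. gamma_layer, gamma_layer_1), which prefixes its weights much as conv2d and conv2d_1 prefix kernel:0 and bias:0; the exact string depends on the TF/Keras version.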