I would like to build a neural net with Keras (TensorFlow backend) that outputs an L2-normalized vector. I have tried the following, but for some reason it does not normalize the output:
import keras.backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

input = Input(shape=input_shape)
...
dense7 = Dense(output_dim=3)(flatten6)
l2_norm = Lambda(lambda x: K.l2_normalize(x, axis=1))(dense7)
return Model(input=input, output=l2_norm)
I found the problem!
So I am using TensorFlow as the backend, and K.l2_normalize(x, axis) calls tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None). Notice that this method has an extra parameter, epsilon. Its implementation looks as follows:
with ops.name_scope(name, "l2_normalize", [x]) as name:
    x = ops.convert_to_tensor(x, name="x")
    square_sum = math_ops.reduce_sum(math_ops.square(x), dim, keep_dims=True)
    x_inv_norm = math_ops.rsqrt(math_ops.maximum(square_sum, epsilon))
    return math_ops.mul(x, x_inv_norm, name=name)
So if the squared sum of the output falls below epsilon (which is 1e-12 by default), the vector is divided by sqrt(epsilon) instead of its actual norm, and the result is not correctly normalized. This is what happens in my case.
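A quick sketch of that failure mode (the input values here are illustrative, chosen so the squared sum falls below the default epsilon):

```python
import numpy as np
import tensorflow as tf

# Squared sum is 1e-14 + 4e-14 = 5e-14, below the default epsilon of 1e-12,
# so l2_normalize divides by sqrt(1e-12) instead of the true norm.
x = tf.constant([[1e-7, 2e-7]], dtype=tf.float32)
y = tf.math.l2_normalize(x, axis=1)
print(np.linalg.norm(y.numpy()))  # ≈ 0.22, not 1.0
```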
You can call the function that tensorflow.keras.backend.l2_normalize delegates to and set the epsilon value yourself:
from tensorflow.python.ops import nn
nn.l2_normalize(x, axis=None, epsilon=1e-12)
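For example, you can wire a smaller epsilon into the model through a Lambda layer using the public tf.math.l2_normalize, which takes the same epsilon argument (the 1e-20 value here is an arbitrary illustrative choice):

```python
import tensorflow as tf

# Normalization layer with a much smaller epsilon than the 1e-12 default,
# so even very small activations are scaled to unit length.
norm_layer = tf.keras.layers.Lambda(
    lambda x: tf.math.l2_normalize(x, axis=1, epsilon=1e-20))

out = norm_layer(tf.constant([[1e-7, 2e-7]], dtype=tf.float32))
print(tf.norm(out, axis=1))  # ≈ 1.0
```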
@thebeancounter You can define your own L2 layer. For example, if layers that depend on masking follow the L2 normalization, you should use the following so the mask is propagated:
import tensorflow as tf
import tensorflow.keras.backend as K

class L2Layer(tf.keras.layers.Layer):
    def __init__(self):
        super(L2Layer, self).__init__()
        self.supports_masking = True

    def call(self, inputs, mask=None):
        return K.l2_normalize(inputs, axis=2)
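A sketch of how this layer might sit in a masking-dependent model (the Embedding/LSTM choices here are illustrative, not from the original question): because supports_masking is True, the mask produced by mask_zero=True flows through L2Layer to the LSTM.

```python
import tensorflow as tf
import tensorflow.keras.backend as K

class L2Layer(tf.keras.layers.Layer):
    """Same layer as above, repeated so this sketch is self-contained."""
    def __init__(self):
        super(L2Layer, self).__init__()
        self.supports_masking = True

    def call(self, inputs, mask=None):
        return K.l2_normalize(inputs, axis=2)

# Embedding emits a mask (mask_zero=True); L2Layer passes it through
# unchanged, and the downstream LSTM consumes it.
inputs = tf.keras.Input(shape=(5,))
embedded = tf.keras.layers.Embedding(input_dim=100, output_dim=8,
                                     mask_zero=True)(inputs)
normalized = L2Layer()(embedded)
outputs = tf.keras.layers.LSTM(4)(normalized)
model = tf.keras.Model(inputs, outputs)
```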
I think you can use a last layer like this:
tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1))