
Normalized output of a Keras layer

I want to create a Keras model with the TensorFlow backend that returns a vector with norm 1. For this purpose, the model ends with the following layer:

main_network = Lambda(lambda t: K.l2_normalize(t, axis=1))(x)

I have also created a test in which I only build the model and, without training it, make a random prediction to check that the output has norm 1. But the test fails:

AssertionError: 0.37070954 != 1 within 0.1 delta

So the Lambda layer is not working correctly, since it is not normalizing the output. I tried every possible value for the axis parameter and the test still fails. What am I missing?
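For reference, the test looks roughly like this (a sketch: the input size and the hidden Dense layer are placeholders, since only the final Lambda layer is shown above):

import unittest
import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

class TestOutputNorm(unittest.TestCase):
    def test_prediction_has_unit_norm(self):
        # Input size and Dense layer are placeholder assumptions;
        # only the final Lambda layer comes from the question.
        inputs = Input(shape=(16,))
        x = Dense(8)(inputs)
        main_network = Lambda(lambda t: K.l2_normalize(t, axis=1))(x)
        model = Model(inputs, main_network)

        # Random prediction without any training.
        pred = model.predict(np.random.rand(1, 16))
        self.assertAlmostEqual(np.linalg.norm(pred[0]), 1, delta=0.1)

if __name__ == '__main__':
    unittest.main()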

Ok, I fixed the problem. For some reason, K.l2_normalize does not work well with very small numbers, so I simply changed the line to this one:

main_network = Lambda(lambda t: K.l2_normalize(1000*t, axis=1))(x)

And now the test passes!
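The likely reason is the epsilon inside tf.nn.l2_normalize, which K.l2_normalize delegates to: the output is computed as x / sqrt(max(sum(x**2), epsilon)) with a default epsilon of 1e-12, so when the squared sum of the activations is smaller than 1e-12 the division uses 1e-6 instead of the true norm, and the result ends up with norm < 1. Multiplying by 1000 pushes the squared sum well above epsilon without changing the direction of the vector. A minimal sketch of the effect, assuming TF 2.x with eager execution:

import numpy as np
import tensorflow as tf  # assumes TF 2.x with eager execution

# Sum of squares is ~2.6e-13, i.e. below the default epsilon of 1e-12,
# so the vector is divided by sqrt(1e-12) = 1e-6 instead of its true norm.
tiny = tf.constant([[3e-7, 1e-7, 4e-7]])

print(np.linalg.norm(tf.math.l2_normalize(tiny, axis=1).numpy()))         # well below 1
print(np.linalg.norm(tf.math.l2_normalize(1000 * tiny, axis=1).numpy()))  # ~1.0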

The L2-normalize formula is:

       x
---------------
sqrt(sum(x**2))

For example, the input [3, 1, 4, 3, 1] is normalized to [3/6, 1/6, 4/6, 3/6, 1/6], whose elements sum to 12/6 = 2; so L2-normalize guarantees that the L2 norm of the output is 1, but not that its sum is 1 (a quick check is shown below). If you need an output whose elements sum to 1, you probably need Softmax.
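A quick numeric check of that example, in plain NumPy:

import numpy as np

x = np.array([3., 1., 4., 3., 1.])
x_norm = x / np.sqrt(np.sum(x ** 2))   # divide by sqrt(36) = 6

print(np.linalg.norm(x_norm))  # 1.0 -> the L2 norm of the output is 1
print(np.sum(x_norm))          # 2.0 -> but the element sum is 12/6 = 2, not 1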

Here is an example in which you can check that the output of the softmax sums to one:

# TF 1.x graph-mode example: build the graph first, then evaluate it in a session.
import tensorflow as tf
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.layers import Lambda

x = tf.keras.layers.Input(tensor=tf.constant([[1, 2, 3, 4, 5]], dtype=tf.float32))
n_layer = Lambda(lambda t: K.softmax(t, axis=-1))(x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(n_layer.eval())  # the printed values sum to 1
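For comparison, the same check can be written without a session in TF 2.x (a sketch, assuming eager execution):

import tensorflow as tf  # TF 2.x, eager execution

x = tf.constant([[1., 2., 3., 4., 5.]])
n_layer = tf.keras.layers.Softmax(axis=-1)(x)

print(n_layer.numpy())                 # the softmax probabilities
print(tf.reduce_sum(n_layer).numpy())  # 1.0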
