
Layer Normalization with Python Keras

I'm studying the paper "An Introduction to Deep Learning for the Physical Layer".
While implementing the proposed network with Python Keras, I need to normalize the output of one of the layers.

One way is simple L2 normalization (||X||^2 = 1), where X is the output tensor of the previous layer. I can implement simple L2 normalization with the following code:

from keras import backend as K
from keras.layers import Lambda
# normalize each sample to unit L2 norm along the feature axis
Lambda(lambda x: K.l2_normalize(x, axis=1))

The other way, which is what I want to know about, is ||X||^2 ≤ 1. Is there any way to constrain the values of a layer's outputs like this?

You can apply a constraint on the layer weights (kernels) for some Keras layers. For example, on a Dense() layer:

from keras.constraints import max_norm
from keras.layers import Dense
# constrain the incoming weight vector of each unit to have L2 norm at most 1
model.add(Dense(units, kernel_constraint=max_norm(1.)))

However, Keras layers do not accept an activity_constraint argument. They do accept activity_regularizer, and you can use that to implement the first kind of regularization more easily.
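As a rough illustration of the activity_regularizer route (not part of the original answer; the function name penalize_norm_deviation is hypothetical), a custom callable can penalize activations whose per-sample L2 norm deviates from 1. Note this is a soft penalty added to the loss, not a hard constraint:

from keras import backend as K
from keras.layers import Dense

# Hypothetical activity regularizer: penalizes each sample's squared
# deviation from unit L2 norm. It only discourages ||X|| != 1 through
# the loss; it does not guarantee the constraint is satisfied.
def penalize_norm_deviation(activations):
    norms = K.sqrt(K.sum(K.square(activations), axis=1))
    return K.sum(K.square(norms - 1.0))

model.add(Dense(units, activity_regularizer=penalize_norm_deviation))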

You can also clip the output values of any layer so that they have a maximum norm of 1.0 (although I'm not sure if this is what you're looking for). For example, if you're using the TensorFlow backend, you can define a custom activation function that clips the layer's output by norm:

import tensorflow as tf

def norm_clip(x):
    # rescale each sample (row) so that its L2 norm is at most 1
    return tf.clip_by_norm(x, 1, axes=[1])

And use it in your model like this:

model.add(Dense(units, activation=norm_clip))
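If you want to sanity-check the clipping behaviour, here is a minimal standalone sketch (assuming a TensorFlow 2.x backend with eager execution; the sample values are purely illustrative):

import tensorflow as tf

# one row with norm > 1 and one row with norm < 1
x = tf.constant([[3.0, 4.0],    # norm 5.0 -> rescaled to norm 1.0
                 [0.3, 0.4]])   # norm 0.5 -> left unchanged
clipped = tf.clip_by_norm(x, 1.0, axes=[1])
print(clipped.numpy())  # [[0.6, 0.8], [0.3, 0.4]]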
