
Layer normalization with an average power constraint

I'm studying the paper "An Introduction to Deep Learning for the Physical Layer". While implementing the proposed network in Python with Keras, I need to normalize some values, namely the output of a previous layer.
One way is plain L2 normalization, ||X||^2 = 1, where X is the output tensor of the previous layer.
In code:

from keras import backend as K
from keras.layers import Lambda

Lambda(lambda x: K.l2_normalize(x, axis=1))
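
(K.l2_normalize(x, axis=1) divides each row by its own L2 norm, so every sample comes out with ||X|| = 1 exactly.)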

The other way, which is what I want to know, is ||X||^2 ≤ 1.
Is there any way to constrain the values of layer outputs like this?
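
One direct construction I can think of is to rescale only the samples whose norm exceeds 1, i.e. project each sample onto the unit ball. A minimal sketch (the helper name is mine, and I have not checked it against the paper's setup):

from keras import backend as K
from keras.layers import Lambda

def project_to_unit_ball(x):
    # Per-sample L2 norm, kept as a column so it broadcasts over features
    norm = K.sqrt(K.sum(K.square(x), axis=1, keepdims=True))
    # Dividing by max(norm, 1) leaves samples inside the unit ball unchanged
    # and shrinks samples outside it onto the unit sphere
    return x / K.maximum(norm, 1.0)

Lambda(project_to_unit_ball)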

I solved it!
That normalization can be easily implemented by using 'Batch Normalization'.
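
For what it's worth, batch normalization drives each output dimension toward zero mean and unit variance across the batch, so it enforces the paper's average power constraint (E[x_i^2] ≈ 1 per dimension) rather than a hard per-sample bound ||X||^2 ≤ 1. A minimal sketch (the layer sizes are placeholders, and turning off the learnable shift/scale is my own choice so the layer stays a pure normalizer):

from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

n_channel = 7  # placeholder channel dimension

model = Sequential([
    Dense(n_channel, activation='linear', input_shape=(16,)),
    # Normalizes each dimension to zero mean / unit variance over the batch,
    # i.e. an average power constraint, not a hard per-sample norm bound
    BatchNormalization(center=False, scale=False),
])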
