
Layer Normalization with Python Keras

I'm studying the paper "An Introduction to Deep Learning for the Physical Layer". While implementing the proposed network with Python Keras, I need to normalize the output of one of the layers.

One way is simple L2 normalization (||X||^2 = 1), where X is the output tensor of the previous layer. I can implement simple L2 normalization with the following code:

from keras import backend as K
from keras.layers import Lambda
Lambda(lambda x: K.l2_normalize(x, axis=1))  # scales each sample (row) to unit L2 norm
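
For context, here is a minimal sketch of how this Lambda layer could be dropped into a model (the layer sizes and input shape are illustrative assumptions, not values from the paper):

from keras.models import Sequential
from keras.layers import Dense, Lambda
from keras import backend as K

model = Sequential()
model.add(Dense(16, input_shape=(8,)))                  # hypothetical preceding layer
model.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))  # each output row now has unit L2 norm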

The other way, which is what I want to know about, is ||X||^2 ≤ 1. Is there any way to constrain the values of a layer's outputs?

You can apply a constraint to the layer weights (kernels) of some Keras layers. For example, on a Dense() layer:

from keras.constraints import max_norm
from keras.layers import Dense

# `model` and `units` are assumed to be defined elsewhere
model.add(Dense(units, kernel_constraint=max_norm(1.)))

However, Keras layers do not accept an activity_constraint argument. They do accept an activity_regularizer, though, and you can use that to implement the first kind of normalization more easily.
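
For instance, here is a minimal sketch of a custom activity regularizer that penalizes outputs whose squared L2 norm deviates from 1 (the function name and the 0.01 penalty weight are illustrative assumptions, not part of the original answer; `model` and `units` are assumed defined as above):

from keras import backend as K
from keras.layers import Dense

def unit_norm_regularizer(x):
    # penalize each sample's deviation from unit squared L2 norm;
    # the 0.01 weight is a hypothetical hyperparameter
    squared_norms = K.sum(K.square(x), axis=1)
    return 0.01 * K.sum(K.square(squared_norms - 1.0))

model.add(Dense(units, activity_regularizer=unit_norm_regularizer))

Note that this only encourages, rather than strictly enforces, unit-norm outputs, since a regularizer adds a penalty to the loss instead of applying a hard constraint.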

You can also clip the output values of any layer to have a maximum norm of 1.0 (although I'm not sure if this is what you're looking for). For example, if you're using the TensorFlow backend, you can define a custom activation function that clips the values of the layer by norm:

import tensorflow as tf

def norm_clip(x):
    # rescale each row to an L2 norm of at most 1; rows already within the bound are unchanged
    return tf.clip_by_norm(x, 1, axes=[1])

Then use it in your model like this:

model.add(Dense(units, activation=norm_clip))
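
As a quick sanity check (a sketch assuming a TF 1.x-style session, matching the standalone Keras imports above; the input values are made up for illustration), tf.clip_by_norm rescales only the rows whose norm exceeds 1 and leaves the rest untouched, which matches the ||X||^2 ≤ 1 behavior asked about:

import tensorflow as tf

x = tf.constant([[3.0, 4.0],    # L2 norm 5.0 -> rescaled to norm 1
                 [0.3, 0.4]])   # L2 norm 0.5 -> left unchanged
clipped = tf.clip_by_norm(x, 1, axes=[1])

with tf.Session() as sess:      # TF 1.x; in TF 2.x use tf.compat.v1.Session or eager mode
    print(sess.run(clipped))    # [[0.6 0.8], [0.3 0.4]]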
