How to normalize Keras network output in a regression problem that demands output with unit L2 norm?
My regression problem requires that the network output y
has unit norm ||y|| = 1.
I would like to impose that as a Lambda
layer after the linear activation:
from keras import backend as K
...
model.add(Dense(numOutputs, activation='linear'))
model.add(Lambda(lambda x: K.l2_normalize(x)))
The backend is TensorFlow. The code compiles, but the network predicts output vectors with distinct norms (the norm is not 1 and varies).
Any hints regarding what I am doing wrong?
The problem is that you haven't passed the axis
argument to the K.l2_normalize
function. As a result, it normalizes all the elements in the whole batch so that their combined norm equals one. To resolve this, just pass axis=-1
to normalize over the last axis:
model.add(Lambda(lambda x: K.l2_normalize(x, axis=-1)))
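To see why the axis matters, here is a small NumPy sketch (hypothetical data, not part of the original question) of the two behaviors: dividing by the norm of the flattened batch versus dividing each sample by its own L2 norm, which is what the Lambda layer should do:

```python
import numpy as np

# A hypothetical batch of two output vectors (batch_size=2, numOutputs=3).
batch = np.array([[3.0, 0.0, 4.0],
                  [0.0, 6.0, 8.0]])

# Without an axis, the norm is taken over the whole flattened batch,
# so individual rows do not end up with unit norm.
global_norm = np.sqrt(np.sum(batch ** 2))
normalized_all = batch / global_norm
row_norms_wrong = np.linalg.norm(normalized_all, axis=-1)  # not [1, 1]

# With axis=-1, each row is divided by its own L2 norm,
# so every sample in the batch has unit norm.
row_norms = np.linalg.norm(batch, axis=-1, keepdims=True)
normalized_rows = batch / row_norms
row_norms_right = np.linalg.norm(normalized_rows, axis=-1)  # each ≈ 1.0
```

The same per-row division is what K.l2_normalize(x, axis=-1) computes inside the Lambda layer, one sample at a time.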