
How to normalize Keras network output in a regression problem that demands output with unit L2 norm?

My regression problem requires that the network output y has unit L2 norm, ||y|| = 1. I would like to impose that as a Lambda layer after the linear activation:

from keras import backend as K  
...  
model.add(Dense(numOutputs, activation='linear'))  
model.add(Lambda(lambda x: K.l2_normalize(x)))  

The backend is TensorFlow. The code compiles but the network predicts output vectors with distinct norms (the norm is not 1 and varies).

Any hints on what I am doing wrong?

The problem is that you haven't passed the axis argument to K.l2_normalize. Without it, the function normalizes all the elements in the whole batch so that their combined norm equals one, rather than normalizing each output vector separately. To fix this, pass axis=-1 to normalize over the last axis:

model.add(Lambda(lambda x: K.l2_normalize(x, axis=-1)))  
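
For reference, here is a minimal, self-contained sketch that checks the fix (the layer sizes numInputs and numOutputs are placeholders, not taken from the original post). Each row of the prediction should now have an L2 norm of approximately 1, whereas calling K.l2_normalize without axis would normalize the entire batch tensor jointly:

import numpy as np  
from keras import backend as K  
from keras.models import Sequential  
from keras.layers import Dense, Lambda  

numInputs, numOutputs = 4, 3  # placeholder sizes for this demo  

model = Sequential()  
model.add(Dense(numOutputs, activation='linear', input_shape=(numInputs,)))  
model.add(Lambda(lambda x: K.l2_normalize(x, axis=-1)))  

# Predict on random inputs and check the per-row (per-sample) norms.  
preds = model.predict(np.random.rand(5, numInputs))  
print(np.linalg.norm(preds, axis=1))  # ~ [1. 1. 1. 1. 1.]  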
