
Should I use regularization with the loss function or in the NN layer?

I'm confused about where regularization is applied. In theory, I have seen regularization added to the loss function.

[image: the loss function with a regularization penalty term added]
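(The image itself is not reproduced here; the standard textbook form it presumably shows adds a weight penalty to the data loss, e.g. for L2 regularization: $L_{total}(w) = L_{data}(w) + \lambda \sum_j w_j^2$.)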

But in the actual implementation in Keras, I saw regularization applied on the neural network layer.

from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

model = Sequential()
model.add(Dense(64, input_dim=64, kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(28, kernel_regularizer=regularizers.l1(0.05)))

Here I used L1 and L2 regularization on different layers. So how will the final loss function be calculated?

Taken from the Keras documentation:

Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are summed into the loss function that the network optimizes.

Indeed, Keras computes the L1/L2 penalty terms on each regularized layer's weights and adds them to the overall loss that the network minimizes, so during backpropagation the gradients of those penalties flow back into the corresponding layers.
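To make that concrete for the two layers above, here is a minimal sketch in plain NumPy (the weight matrices W1 and W2 and the data_loss value are hypothetical placeholders, not actual Keras internals) showing how penalties from both layers end up in a single loss value:

import numpy as np

# Hypothetical weights of the two Dense layers above
W1 = np.random.randn(64, 64)   # layer regularized with l2(0.01)
W2 = np.random.randn(64, 28)   # layer regularized with l1(0.05)

data_loss = 0.3  # placeholder for the ordinary loss (e.g. cross-entropy)

l2_penalty = 0.01 * np.sum(W1 ** 2)      # 0.01 * sum of squared weights
l1_penalty = 0.05 * np.sum(np.abs(W2))   # 0.05 * sum of absolute weights

# Keras sums every layer's penalty into the single loss it optimizes
total_loss = data_loss + l2_penalty + l1_penalty
print(total_loss)

In Keras itself, these per-layer penalty tensors are collected (they are exposed via model.losses) and added to the compiled loss before gradients are computed.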


 