
L2 regularization of weights in Edward

I'm trying to understand how we can use regularization with Edward models. I'm still new to TensorFlow (which is used as the backend of Edward). Consider the model below:

import tensorflow as tf
import edward as ed
from edward.models import Categorical, Normal

# prior
w = Normal(loc=tf.zeros((d, c)), scale=tf.ones((d, c)))

# likelihood
y = Categorical(logits=tf.matmul(X, w))

# posterior
loc_qw = tf.get_variable("qw/loc", [d, c])
scale_qw = tf.nn.softplus(tf.get_variable("qw/scale", [d, c]))
qw = Normal(loc=loc_qw, scale=scale_qw)

# inference
inference = ed.KLqp({w: qw}, data={X: train_X, y: train_y})

I notice that Edward uses regularization losses in its loss function:

loss = -(p_log_lik - kl_penalty - reg_penalty)
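
If I understand correctly, that reg_penalty is gathered from TensorFlow's regularization-loss collection. A minimal sketch of the mechanism, assuming the TF 1.x API (v is a hypothetical variable, and the 0.1 scale is just an example value):

import tensorflow as tf

# Any variable created with a `regularizer` argument registers its penalty
# in the tf.GraphKeys.REGULARIZATION_LOSSES collection.
v = tf.get_variable("v", [3],
                    regularizer=tf.contrib.layers.l2_regularizer(0.1))

# The reg_penalty term above then corresponds to the sum of that collection.
reg_penalty = tf.reduce_sum(tf.losses.get_regularization_losses())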

However, I can't figure out how to apply regularization losses to the Edward model. How can we add L1 or L2 regularization to the above model?

Thanks!

I know that a Normal prior is equivalent to L2 regularization. But imagine the prior is not Normal, and we want to regularize the parameters we are trying to estimate during stochastic optimization.
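
For reference, that equivalence comes from the negative log-density of a zero-mean Normal prior, which is exactly a scaled L2 penalty (up to an additive constant):

-log N(w | 0, σ²I) = ||w||² / (2σ²) + const

so MAP estimation under that prior is the same as minimizing the data loss plus an L2 term with weight 1/(2σ²).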

I found that this can be done using the regularizer parameter of the tf variables in the posterior:

loc_qw = tf.get_variable("qw/loc", [d, c],
                         regularizer=tf.contrib.layers.l2_regularizer(reg_scale))
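
To verify that the penalty is actually registered (and to use L1 instead of L2), here is a minimal sketch under the same TF 1.x contrib API; the reg_scale, d, and c values are placeholders:

import tensorflow as tf

reg_scale = 0.01  # hypothetical regularization strength
d, c = 10, 3      # hypothetical dimensions

# L2 penalty on the variational mean; tf.contrib.layers.l1_regularizer(reg_scale)
# would add an L1 penalty instead.
loc_qw = tf.get_variable("qw/loc", [d, c],
                         regularizer=tf.contrib.layers.l2_regularizer(reg_scale))

# The regularizer registers its term in tf.GraphKeys.REGULARIZATION_LOSSES,
# which is what shows up as reg_penalty in Edward's loss.
print(tf.losses.get_regularization_losses())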

