
L2 regularization of weights in Edward

I'm trying to understand how we can use regularization with Edward models. I'm still new to TensorFlow (which Edward uses as its backend). Consider the model below:

import tensorflow as tf
import edward as ed
from edward.models import Categorical, Normal

# data placeholder (d features, c classes)
X = tf.placeholder(tf.float32, [None, d])

# prior
w = Normal(loc=tf.zeros([d, c]), scale=tf.ones([d, c]))

# likelihood
y = Categorical(logits=tf.matmul(X, w))

# variational posterior
loc_qw = tf.get_variable("qw/loc", [d, c])
scale_qw = tf.nn.softplus(tf.get_variable("qw/scale", [d, c]))
qw = Normal(loc=loc_qw, scale=scale_qw)

# inference
inference = ed.KLqp({w: qw}, data={X: train_X, y: train_y})

I noticed that Edward includes regularization losses in its loss function: loss = -(p_log_lik - kl_penalty - reg_penalty).
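As far as I can tell, that reg_penalty term comes from TensorFlow's regularization-loss collection, which any variable built with a regularizer contributes to. A minimal sketch of the mechanism (not Edward's exact internals):

import tensorflow as tf

# a variable created with a regularizer adds a term to the
# tf.GraphKeys.REGULARIZATION_LOSSES collection
v = tf.get_variable("v", [3],
                    regularizer=tf.contrib.layers.l2_regularizer(0.1))

# frameworks can then sum that collection into a single penalty
reg_penalty = tf.losses.get_regularization_loss()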

However, I can't figure out how to apply regularization losses to an Edward model. How can we add L1 or L2 regularization to the model above?

Thanks!

I know that a Normal prior is equivalent to L2 regularization. Imagine the prior is not Normal, and we want to regularize the parameters we are estimating during stochastic optimization.
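(For reference, this is the MAP view of that equivalence: the negative log-density of a Normal prior is a scaled L2 penalty, and a Laplace prior gives L1 in the same way.)

-\log p(w) = \frac{1}{2\sigma^2}\,\lVert w \rVert_2^2 + \mathrm{const}, \qquad w \sim \mathcal{N}(0, \sigma^2 I)

-\log p(w) = \frac{1}{b}\,\lVert w \rVert_1 + \mathrm{const}, \qquad w_i \sim \mathrm{Laplace}(0, b)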

I found that this can be done using the regularizer parameter of the TensorFlow variables in the posterior.

loc_qw = tf.get_variable("qw/loc", [d, c], regularizer=tf.contrib.layers.l2_regularizer(reg_scale))
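Putting it together, a minimal sketch, assuming a hypothetical reg_scale hyperparameter and attaching the regularizer to both variational parameters (swap in tf.contrib.layers.l1_regularizer for L1):

reg = tf.contrib.layers.l2_regularizer(reg_scale)  # or l1_regularizer for L1

# attach the regularizer to the variational parameters
loc_qw = tf.get_variable("qw/loc", [d, c], regularizer=reg)
scale_qw = tf.nn.softplus(
    tf.get_variable("qw/scale", [d, c], regularizer=reg))
qw = Normal(loc=loc_qw, scale=scale_qw)

# per the loss formula above, KLqp picks the penalty up from
# tf.GraphKeys.REGULARIZATION_LOSSES, so nothing else should be needed
inference = ed.KLqp({w: qw}, data={X: train_X, y: train_y})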
