Does tf.nn.l2_loss and tf.contrib.layers.l2_regularizer serve the same purpose of adding L2 regularization in tensorflow?
It seems that L2 regularization in TensorFlow can be implemented in two ways:

(i) using tf.nn.l2_loss, or (ii) using tf.contrib.layers.l2_regularizer.

Do both approaches serve the same purpose? If they differ, how do they differ?
They do the same thing (at least as of now). The only difference is that tf.contrib.layers.l2_regularizer multiplies the result of tf.nn.l2_loss by scale.
Look at the implementation of tf.contrib.layers.l2_regularizer [https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/contrib/layers/python/layers/regularizers.py]:
def l2_regularizer(scale, scope=None):
  """Returns a function that can be used to apply L2 regularization to weights.

  Small values of L2 can help prevent overfitting the training data.

  Args:
    scale: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
    scope: An optional scope name.

  Returns:
    A function with signature `l2(weights)` that applies L2 regularization.

  Raises:
    ValueError: If scale is negative or if scale is not a float.
  """
  if isinstance(scale, numbers.Integral):
    raise ValueError('scale cannot be an integer: %s' % (scale,))
  if isinstance(scale, numbers.Real):
    if scale < 0.:
      raise ValueError('Setting a scale less than 0 on a regularizer: %g.' %
                       scale)
    if scale == 0.:
      logging.info('Scale of 0 disables regularizer.')
      return lambda _: None

  def l2(weights):
    """Applies l2 regularization to weights."""
    with ops.name_scope(scope, 'l2_regularizer', [weights]) as name:
      my_scale = ops.convert_to_tensor(scale,
                                       dtype=weights.dtype.base_dtype,
                                       name='scale')
      return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)

  return l2
The line you are interested in is:

return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)

So in practice, tf.contrib.layers.l2_regularizer calls tf.nn.l2_loss internally and simply multiplies its result by the scale argument.
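To make the relationship concrete, here is a minimal NumPy sketch of what the two ops compute; the `l2_loss` and `l2_regularizer` functions below are plain-Python stand-ins for the TensorFlow ops, not the real implementations (note that tf.nn.l2_loss includes a factor of 1/2):

```python
import numpy as np

def l2_loss(weights):
    # Stand-in for tf.nn.l2_loss: sum(weights ** 2) / 2 (note the 1/2 factor).
    return np.sum(np.square(weights)) / 2.0

def l2_regularizer(scale):
    # Stand-in for tf.contrib.layers.l2_regularizer: returns a function
    # that multiplies l2_loss(weights) by `scale`.
    def l2(weights):
        return scale * l2_loss(weights)
    return l2

w = np.array([1.0, -2.0, 3.0])
print(l2_loss(w))               # 7.0  (= (1 + 4 + 9) / 2)
print(l2_regularizer(0.01)(w))  # 0.07 (= 0.01 * 7.0)
```

So the regularizer is just the raw l2_loss value rescaled; if you use tf.nn.l2_loss directly, you multiply by your regularization coefficient yourself.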