
Renormalize weight matrix using TensorFlow

I'd like to add a max-norm constraint to several of the weight matrices in my TensorFlow graph, à la Torch's renorm method.

If the L2 norm of any neuron's weight vector exceeds max_norm, I'd like to scale its weights down so that their L2 norm is exactly max_norm.

What's the best way to express this in TensorFlow?
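For reference, the renorm rule I'm after can be sketched in plain NumPy (an illustrative sketch only, not a proposed solution):

```python
import numpy as np

def max_norm_renorm(W, max_norm):
    """Scale each row of W down so its L2 norm is at most max_norm."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)  # per-row L2 norms
    # Scale is 1 where the norm is within bounds, max_norm/norm otherwise.
    scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
    return W * scale

W = np.array([[3.0, 4.0],    # norm 5 -> rescaled to norm 2
              [0.6, 0.8]])   # norm 1 -> unchanged
print(max_norm_renorm(W, 2.0))
```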

Here's a possible implementation:

import tensorflow as tf

def max_norm_regularizer(threshold, axes=1, name="max_norm", collection="max_norm"):
    def max_norm(weights):
        clipped = tf.clip_by_norm(weights, clip_norm=threshold, axes=axes)
        clip_weights = tf.assign(weights, clipped, name=name)
        tf.add_to_collection(collection, clip_weights)
        return None  # there is no regularization loss term
    return max_norm

Here's how you would use it:

from tensorflow.contrib.layers import fully_connected
from tensorflow.contrib.framework import arg_scope

with arg_scope(
        [fully_connected],
        weights_regularizer=max_norm_regularizer(1.5)):
    hidden1 = fully_connected(X, 200, scope="hidden1")
    hidden2 = fully_connected(hidden1, 100, scope="hidden2")
    outputs = fully_connected(hidden2, 5, activation_fn=None, scope="outs")

max_norm_ops = tf.get_collection("max_norm")

[...]

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        for X_batch, y_batch in load_next_batch():
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
            sess.run(max_norm_ops)

This creates a 3-layer neural network and trains it with max-norm regularization at every layer (with a threshold of 1.5). I just tried it, and it seems to work. Hope this helps! Suggestions for improvement are welcome. :)
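As an aside, in more recent versions of TensorFlow the same idea is built in as a Keras weight constraint, so the manual clip ops are no longer needed. A minimal sketch, assuming TF 2.x, mirroring the 3-layer network above with the same 1.5 threshold:

```python
import tensorflow as tf

# MaxNorm with axis=0 constrains each column of the kernel, i.e. each
# neuron's incoming weight vector, to an L2 norm of at most 1.5.
constraint = tf.keras.constraints.MaxNorm(max_value=1.5, axis=0)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(200, activation="relu", kernel_constraint=constraint),
    tf.keras.layers.Dense(100, activation="relu", kernel_constraint=constraint),
    tf.keras.layers.Dense(5, kernel_constraint=constraint),
])
# Keras applies the constraint automatically after each gradient update,
# so there is no collection of clip ops to run by hand.
```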

Notes

This code is based on tf.clip_by_norm():

>>> x = tf.constant([0., 0., 3., 4., 30., 40., 300., 400.], shape=(4, 2))
>>> print(x.eval())
[[   0.    0.]
 [   3.    4.]
 [  30.   40.]
 [ 300.  400.]]
>>> clip_rows = tf.clip_by_norm(x, clip_norm=10, axes=1)
>>> print(clip_rows.eval())
[[ 0.          0.        ]
 [ 3.          4.        ]
 [ 6.          8.        ]  # clipped!
 [ 6.00000048  8.        ]] # clipped!

If you need to, you can also clip columns:

>>> clip_cols = tf.clip_by_norm(x, clip_norm=350, axes=0)
>>> print(clip_cols.eval())
[[   0.            0.        ]
 [   3.            3.48245788]
 [  30.           34.82457733]
 [ 300.          348.24578857]]
                # clipped!
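Under the hood, tf.clip_by_norm scales the tensor by clip_norm / norm only where the norm exceeds clip_norm. A NumPy sketch of that computation, reproducing the two examples above:

```python
import numpy as np

def clip_by_norm_np(t, clip_norm, axes):
    """Scale t so its L2 norm along `axes` is at most clip_norm."""
    norm = np.sqrt((t ** 2).sum(axis=axes, keepdims=True))
    # Where norm <= clip_norm the factor is 1, so t is left unchanged.
    return t * clip_norm / np.maximum(norm, clip_norm)

x = np.array([[0., 0.], [3., 4.], [30., 40.], [300., 400.]])
print(clip_by_norm_np(x, 10, 1))   # clip rows
print(clip_by_norm_np(x, 350, 0))  # clip columns
```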

Using Rafał's suggestion and TensorFlow's implementation of clip_by_norm, here's what I came up with:

def renorm(x, axis, max_norm):
    '''Renormalizes the sub-tensors along axis such that they do not exceed norm max_norm.'''
    # This elaborate dance avoids empty slices, which TF dislikes.
    # It computes "all axes except `axis`" without knowing the rank statically.
    rank = tf.rank(x)
    bigrange = tf.range(-1, rank + 1)
    dims = tf.slice(
                tf.concat([tf.slice(bigrange, [0], [1 + axis]),
                           tf.slice(bigrange, [axis + 2], [-1])], axis=0),
                [1], rank - [1])  # rank - [1] is the 1-D size tensor [rank - 1]

    # Determine which sub-tensors need to be renormalized.
    l2norm_inv = tf.rsqrt(tf.reduce_sum(x * x, dims, keep_dims=True))
    scale = max_norm * tf.minimum(l2norm_inv, tf.constant(1.0 / max_norm))

    # Broadcast the scalings
    return tf.multiply(scale, x)

It seems to have the desired behavior for 2-dimensional matrices, and it should generalize to higher-rank tensors:

> x = tf.constant([0., 0., 3., 4., 30., 40., 300., 400.], shape=(4, 2))
> print x.eval()
[[   0.    0.]  # rows have norms of 0, 5, 50, 500
 [   3.    4.]  # cols have norms of ~302, ~402
 [  30.   40.]
 [ 300.  400.]]
> print renorm(x, 0, 10).eval()
[[ 0.          0.        ]  # unaffected
 [ 3.          4.        ]  # unaffected
 [ 5.99999952  7.99999952]  # rescaled
 [ 6.00000048  8.00000095]] # rescaled
> print renorm(x, 1, 350).eval()
[[   0.            0.        ]  # col 0 is unaffected
 [   3.            3.48245788]  # col 1 is rescaled
 [  30.           34.82457733]
 [ 300.          348.24578857]]
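When the rank of x is known statically (the common case for weight matrices), the tf.range/tf.slice dance isn't needed: the reduction axes are simply "every axis except the one we keep" and can be built as a Python tuple. A sketch of that simplification in NumPy (the TF version is the same with tf.* ops):

```python
import numpy as np

def renorm_simple(x, axis, max_norm):
    """Same semantics as renorm() above, with statically known rank."""
    dims = tuple(d for d in range(x.ndim) if d != axis)
    with np.errstate(divide="ignore"):  # all-zero slices give inf here,
        l2norm_inv = 1.0 / np.sqrt((x * x).sum(axis=dims, keepdims=True))
    # ... but the minimum caps the scale at 1, so zero slices stay zero.
    scale = max_norm * np.minimum(l2norm_inv, 1.0 / max_norm)
    return scale * x
```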

Take a look at the clip_by_norm function, which does exactly this. It takes a tensor as input and returns the tensor scaled down.
