
Normalization of a tensor array

Experts, I am new to DNN and Python. I am trying to use TensorFlow to do some DNN learning work. During my work, I came across a problem that I cannot solve myself. In one step, I would like to normalize a tensor called "inputs". The normalization simply takes the maximum absolute value of a vector and divides all the elements of the vector by that maximum. But the following problem occurred:

ValueError Traceback (most recent call last) in ()

 55             tmp_index = tf.argmax(tmp_abs,0)
 56             tmp_index1 = tf.cast(tmp_index,dtype = tf.int32)
---> 57             inputs = inputs/tmp_abs[tmp_index1]
 58 
 59         if index != len(Layers)-1:

InvalidArgumentError: Shape must be rank 1 but is rank 2 for 'hidden2_3/strided_slice' (op: 'StridedSlice') with input shapes: [?,1], [1,1], [1,1], [1].

Any advice will be appreciated. Thanks!

import numpy as np
import tensorflow as tf

# Layers, Fea_Size and get_stddev are defined earlier in my script

# input features and labels
x_ = tf.placeholder(name="input", shape=[None, 1], dtype=np.float32)
y_ = tf.placeholder(name="output", shape=[None, 1], dtype=np.float32)

# tf variables
Hidden = []

# Hidden Layers
for index, num_hidden in enumerate(Layers):
    with tf.name_scope("hidden{}".format(index+1)):
        if index == 0:
            weights = tf.Variable(tf.truncated_normal([Fea_Size, num_hidden], stddev=get_stddev(Fea_Size, num_hidden)))
        else:
            weights = tf.Variable(tf.truncated_normal([Layers[index-1], num_hidden], stddev=get_stddev(Layers[index-1], num_hidden)))
        bias = tf.Variable(tf.zeros([num_hidden]))

        inputs = x_ if index == 0 else Hidden[index-1]
        if index != 0:
            # normalize the layer input by its maximum absolute value -- this is the failing step
            tmp_abs = tf.abs(inputs)
            tmp_index = tf.argmax(tmp_abs, 0)
            tmp_index1 = tf.cast(tmp_index, dtype=tf.int32)
            inputs = inputs/tmp_abs[tmp_index1]

        if index != len(Layers)-1:
            Hidden.append(tf.nn.relu(tf.matmul(inputs, weights) + bias))
        else:
            nonlin_model = tf.nn.relu(tf.matmul(inputs, weights) + bias)

nonlin_loss = tf.reduce_mean(tf.pow(nonlin_model - y_, 2), name='cost')
train_step_nonlin = tf.train.GradientDescentOptimizer(0.01).minimize(nonlin_loss)

The problem is that your inputs has an unknown size along axis 0, and tf.argmax returns a tensor of shape [1] rather than a Python scalar; indexing tmp_abs with that tensor builds a slice specification of rank 2, which is exactly what the StridedSlice error is complaining about.
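The failure can be reproduced in isolation; a minimal sketch:

import tensorflow as tf

x = tf.placeholder(shape=[None, 1], dtype=tf.float32)
tmp_abs = tf.abs(x)
idx = tf.cast(tf.argmax(tmp_abs, 0), tf.int32)  # shape [1], not a scalar
bad = tmp_abs[idx]  # indexing with a shape-[1] tensor -> rank-2 slice spec
# raises the "Shape must be rank 1 but is rank 2 ... 'StridedSlice'" error from the question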

You can instead use:

inputs = inputs/tf.reduce_max(tf.abs(inputs))

tf.abs returns the element-wise absolute value of inputs, and tf.reduce_max returns the maximum over all its elements, so the division rescales the tensor by its largest absolute value.
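Dropped into the loop from the question, the three argmax/cast/slice lines collapse to a single division (a minimal sketch, assuming the same surrounding variables):

        inputs = x_ if index == 0 else Hidden[index-1]
        if index != 0:
            # scale the layer input by its largest absolute activation;
            # no argmax or tensor indexing is needed
            inputs = inputs / tf.reduce_max(tf.abs(inputs))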

Here's a code snippet that worked for me:

import tensorflow as tf

inputs = tf.placeholder(shape=[None, 1], dtype=tf.float32)
inputs_normal = inputs / tf.reduce_max(tf.abs(inputs))

sess = tf.Session()
all_pos = sess.run(inputs_normal, feed_dict={inputs: [[1], [2], [3]]})
all_neg = sess.run(inputs_normal, feed_dict={inputs: [[-1], [-2], [-3]]})
pos_neg = sess.run(inputs_normal, feed_dict={inputs: [[1], [2], [-3]]})

Here are the values:

all_pos = array([[0.33333334],
                 [0.6666667 ],
                 [1.        ]], dtype=float32)

all_neg = array([[-0.33333334],
                 [-0.6666667 ],
                 [-1.        ]], dtype=float32)

pos_neg = array([[ 0.33333334],
                 [ 0.6666667 ],
                 [-1.        ]], dtype=float32)

I've shown it for 2-D tensors, but it should work for higher dimensional tensors as well.
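For example, to normalize each feature column of a rank-2 batch independently rather than by one global maximum, tf.reduce_max accepts axis and keepdims arguments. A hedged sketch (the names batch and per_feature are illustrative):

import tensorflow as tf

batch = tf.placeholder(shape=[None, 3], dtype=tf.float32)
# keepdims=True keeps the reduced axis, so broadcasting divides each
# column by that column's own maximum absolute value
per_feature = batch / tf.reduce_max(tf.abs(batch), axis=0, keepdims=True)

sess = tf.Session()
print(sess.run(per_feature, feed_dict={batch: [[1., 10., -2.],
                                               [3., -5.,  4.]]}))
# each column now peaks at +/-1: [[0.333, 1.0, -0.5], [1.0, -0.5, 1.0]]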
