Matrix norm in TensorFlow

I need to compute the Frobenius norm in order to implement this formula using the TensorFlow framework:

\|w\|_F^2 = \sum_{i=1}^{50} \sum_{j=1}^{100} w_{ij}^2

where w is a matrix with 50 rows and 100 columns.

I tried to write something, but I don't understand how to fill in the axis argument.

tf.pow(
    tf.norm(x, ord='fro', axis=?), 2
)

According to the TensorFlow docs I have to use a 2-tuple (or a 2-list), because it determines the axes in the tensor over which to compute a matrix norm, but I simply need a plain Frobenius norm. In SciPy, for example, I can do it without specifying any axis.

So, what should I use as axis to emulate the SciPy function?

So the Frobenius norm is a sum over an n×m matrix, but tf.norm also allows processing several vectors and matrices in a batch.

To better understand, imagine you have a rank 3 tensor:

t = [[[2], [4], [6]], [[8], [10], [12]], [[14], [16], [18]]]

It can be seen as several matrices aligned along one direction, but the function can't figure out by itself which one. It could be either a batch of the following matrices:

[2, 4, 6], [8, 10, 12], [14, 16, 18]

or

[2, 8, 14], [4, 10, 16], [6, 12, 18]

So basically, axis tells tf.norm which directions you want to consider when doing the summation in the Frobenius norm.
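As a minimal sketch of the difference (assuming TensorFlow 2.x eager execution; the values are written as floats, since tf.norm requires a floating-point input):

import tensorflow as tf

t = tf.constant([[[2.], [4.], [6.]],
                 [[8.], [10.], [12.]],
                 [[14.], [16.], [18.]]])  # shape (3, 3, 1)

# Matrix axes [1, 2]: axis 0 is the batch axis, so we get one Frobenius
# norm per 3x1 matrix [2, 4, 6], [8, 10, 12], [14, 16, 18].
print(tf.norm(t, ord='fro', axis=[1, 2]))  # ~[ 7.48, 17.55, 27.86]

# Matrix axes [0, 2]: axis 1 is the batch axis, so the matrices are
# [2, 8, 14], [4, 10, 16], [6, 12, 18] instead.
print(tf.norm(t, ord='fro', axis=[0, 2]))  # ~[16.25, 19.29, 22.45]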

In your case (a plain 50×100 matrix with no batch dimension), either [0, 1] or [-2, -1] would do the trick.
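For instance (a sketch; tf.random.normal is just a stand-in for your actual w, and TensorFlow 2.x is assumed):

import tensorflow as tf

w = tf.random.normal([50, 100])  # stand-in for the 50x100 matrix w

# axis=[-2, -1] marks the last two axes as the matrix axes, so this
# keeps working even if w later gains a leading batch dimension.
fro_norm_squared = tf.norm(w, ord='fro', axis=[-2, -1]) ** 2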

Independent of the number of dimensions of the tensor,

tf.sqrt(tf.reduce_sum(tf.square(w)))

should do the trick.
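A quick sanity check of that identity (again with a random stand-in for w and TensorFlow 2.x eager execution assumed):

import tensorflow as tf

w = tf.random.normal([50, 100])  # random stand-in for the matrix

# Full reduction over all axes, whatever the rank of w:
manual = tf.sqrt(tf.reduce_sum(tf.square(w)))

# Agrees with tf.norm up to floating-point rounding:
builtin = tf.norm(w, ord='fro', axis=[-2, -1])
print(manual.numpy(), builtin.numpy())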

Negative indices are supported. Example: If you are passing a tensor that can be either a matrix or a batch of matrices at runtime, pass axis=[-2,-1] instead of axis=None to make sure that matrix norms are computed.

I just tested and [-2,-1] works.

It seems to me you are better off simply calling

tf.reduce_sum(tf.multiply(x, x))

Calling norm, which square-roots the above result, and then pow, which works for any power and therefore potentially uses an elaborate algorithm, is overkill.
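In other words (a sketch, with x as a random stand-in for the matrix in the question):

import tensorflow as tf

x = tf.random.normal([50, 100])  # stand-in for the matrix

# The squared Frobenius norm is just the sum of elementwise squares,
# with no square root followed by a power.
fro_squared = tf.reduce_sum(tf.multiply(x, x))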

Try axis=(0, 1). I think it will solve your problem!

In my case the Frobenius norm did not work on the matrix directly; I needed to flatten it into vectors first.

  1. Reshape your array to (batch_size, -1).
  2. Use tf.norm(reshaped_data, ord='fro', axis=(0, 1)).
  3. Calling reshape under TensorFlow eager execution may throw an error; from version 2.5 onwards, enable NumPy behavior first:

import tensorflow.python.ops.numpy_ops.np_config as np_config
np_config.enable_numpy_behavior()

For example, here is how I am using this:

# Difference between the ground-truth and predicted heatmaps
heat_difference = gt_hm - pd_hm
# Flatten each sample to a vector (requires the NumPy behavior enabled above)
heat_difference = heat_difference.reshape(batch_size, -1)
# Squared Frobenius norm of the difference, scaled by the batch size
hm_loss = tf.square(tf.norm(heat_difference, ord='fro', axis=(0, 1)) / batch_size)
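One thing to note about this snippet: because the division by batch_size happens inside tf.square, the result is the squared norm scaled by 1/batch_size**2. If you want the squared norm divided by batch_size just once, move the division outside the square.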
