
Sum of second order derivatives in Tensorflow

I have a function in TensorFlow, let's call it f, that takes as input a tensor x of shape [None, N, M] and outputs one number per row, i.e. the output is a tensor of shape [None] for some arbitrary number of rows.

I want to compute the Laplacian of f, which in my case means I want to compute a tensor y of shape [None] whose rows are given by

y_i = \sum_{j=1}^{N} \sum_{k=1}^{M} \frac{\partial^2 f}{\partial x_{ijk}^2}
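
For the concrete f used in the example below the second derivatives can be worked out by hand, which pins down what the outputs should look like:

f(x_i) = \sum_{j,k} \frac{x_{ijk}^3}{3}, \qquad \frac{\partial f}{\partial x_{ijk}} = x_{ijk}^2, \qquad \frac{\partial^2 f}{\partial x_{ijk}^2} = 2\,x_{ijk}

so each row of y should be twice the sum of that row's entries of x; for the example values below that is 2 * 15.5 = 31.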

I can get the first order gradient the way I want to. For the sake of this example, say my code is like so:

import tensorflow as tf
x = tf.Variable([[[0.5, 1, 2], [3, 4, 5]]], dtype=tf.float64)
y = tf.reduce_sum(x*x*x/3, axis=[1, 2])  # f(x): sum of x^3/3 over each row
grad = tf.gradients(y, x)[0]             # dy/dx = x^2 elementwise

which gives as expected

grad: [[[ 0.25  1.    4.  ]
        [ 9.   16.   25.  ]]]

I thought that I could now do the same on grad to get the second order:

lap = tf.gradients(grad, x)

But this gives

lap: [-117.125]

which is nothing like what I would expect. I would have wanted

lap: [[[ 1  2  4]
       [ 6  8 10]]]

or just the sum for each row (1 + 2 + 4 + 6 + 8 + 10 = 31), like so:

lap: [ 31 ]

Obviously, this doesn't result in what I want, and I'm a bit stumped on how to fix it. Any help?

I've also tried tf.hessians, which kind of works:

hess = tf.hessians(y, x)

which gives

hess:
 [array([[[[[[ 1.,  0.,  0.],
             [ 0.,  0.,  0.]]],
           [[[ 0.,  2.,  0.],
             [ 0.,  0.,  0.]]],
           [[[ 0.,  0.,  4.],
             [ 0.,  0.,  0.]]]],

          [[[[ 0.,  0.,  0.],
             [ 6.,  0.,  0.]]],
           [[[ 0.,  0.,  0.],
             [ 0.,  8.,  0.]]],
           [[[ 0.,  0.,  0.],
             [ 0.,  0., 10.]]]]]])]

This has the correct numbers in there, but it also computes many, many more derivatives than I need (the full Hessian, where I only want its diagonal), and picking the right numbers out of this mess seems very inefficient.
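
I suppose the diagonal could be picked out by flattening the Hessian, something like this (an untested sketch for the toy shapes above, where x has 6 elements; note the full Hessian still gets computed, so this only avoids the indexing mess, not the wasted work):

hess = tf.hessians(y, x)[0]          # shape [1, 2, 3, 1, 2, 3]
hess_mat = tf.reshape(hess, [6, 6])  # flatten into a 6x6 Hessian matrix
diag = tf.diag_part(hess_mat)        # the 6 second derivatives I want
lap = tf.reduce_sum(diag)            # -> 31.0 for the example values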

Secondary question: I think the issue is related to tf.gradients(ys, xs) returning the "derivatives of sum of ys w.r.t. x in xs". I don't want derivatives of sums, so I'm thinking I might need to run tf.gradients several times, on sub-slices of grad. But why, then, do I get the full first order gradient with the code above? As far as I can tell, no summing has taken place, since I get all the derivatives I want.
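
My current guess: since row i of y only depends on row i of x, the cross-row derivatives all vanish, so differentiating the sum of y still recovers every element. A small experiment along those lines (x2, y2, g2 are names made up for this sketch):

x2 = tf.Variable([[[0.5, 1, 2], [3, 4, 5]],
                  [[1.0, 1, 1], [1, 1, 1]]], dtype=tf.float64)
y2 = tf.reduce_sum(x2*x2*x2/3, axis=[1, 2])  # shape [2], one value per row
g2 = tf.gradients(y2, x2)[0]  # d(y2[0] + y2[1])/dx2; still x2^2 elementwise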

Side note: if it helps, I can refactor the rest of the code so that x has shape [None, N*M] instead.

It is kind of amusing because the following works for me perfectly.

Input Code :

import tensorflow as tf
x = tf.Variable([[[0.5, 1, 2], [3, 4, 5]]], dtype=tf.float64)
y = tf.reduce_sum(x*x*x/3, axis=[1, 2])
grad = tf.gradients(y, x)[0]
grad2 = tf.gradients(grad, x)
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)
    g1, g2 = sess.run([grad, grad2])

print('First order : {}'.format(g1))
print('Second order : {}'.format(g2))

Output :

First order : [[[ 0.25  1.    4.  ]
  [ 9.   16.   25.  ]]]
Second order : [array([[[ 1.,  2.,  4.],
        [ 6.,  8., 10.]]])]
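
A caveat worth adding: this only equals the elementwise second derivatives because your f is separable, so its Hessian is diagonal; tf.gradients(grad, x) actually sums each row of the Hessian, which happens to coincide with the diagonal here. For an f that mixes inputs within a row, the off-diagonal terms get summed in too, which is presumably where your -117.125 came from. Under that caveat, reducing to the per-row Laplacian you asked for is just (a hypothetical continuation of the code above):

lap = tf.reduce_sum(grad2[0], axis=[1, 2])  # -> [31.] for this example

If the Hessian is not diagonal, one sketch using your flattened [None, N*M] layout is to differentiate one column of grad at a time and keep only the matching column, i.e. one tf.gradients call per input dimension (all names here are made up, and yf stands in for your real f):

xf = tf.placeholder(tf.float64, [None, 6])     # flattened inputs, N*M = 6
yf = tf.reduce_sum(xf**3 / 3, axis=1)          # per-row output, shape [None]
gf = tf.gradients(yf, xf)[0]                   # first order, shape [None, 6]
second = [tf.gradients(gf[:, k], xf)[0][:, k]  # d2 f / d x_k^2, per row
          for k in range(6)]
lap_f = tf.add_n(second)                       # per-row Laplacian, shape [None]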
