
How to create a new TensorFlow op from a composition of existing TensorFlow ops

I know how to use tf.py_func to create a new custom op that runs on the CPU. I also know from the TF guide that you can create a new op and its gradient in C++.

What I am looking for is none of the above. I want to define a custom gradient function for a composition of TF ops. tf.RegisterGradient can be used along with gradient_override_map to define a custom gradient for an existing op, but how do you register a composition of TF ops as a new op in the first place?

A similar question has been asked here with no answer.

tfe.custom_gradient is the decorator you want to use.

I have provided three different ways of defining custom gradients in TensorFlow in this repo.

custom_gradient_with_py_func:

In this approach we define a TF op using tf.py_func and assign a custom gradient function to it.

# `g` is the default graph and `rnd_name` is a uniquely named gradient
# registered with tf.RegisterGradient (see the full sketch below).
with g.gradient_override_map({"PyFunc": rnd_name}):
    return tf.py_func(func, inp, Tout, stateful=stateful, name=name)
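For context, here is a minimal sketch of the complete wrapper around that snippet; the 'PyFuncGrad' name prefix and the grad keyword argument are assumptions following the common pattern for this trick, and the linked repo has the exact version:

import numpy as np
import tensorflow as tf

def py_func(func, inp, Tout, stateful=True, name=None, grad=None):
    # Register the custom gradient function under a unique, collision-free name.
    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))
    tf.RegisterGradient(rnd_name)(grad)
    g = tf.get_default_graph()
    # Every op created by tf.py_func has type "PyFunc"; remap its gradient.
    with g.gradient_override_map({"PyFunc": rnd_name}):
        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)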

custom_gradient_with_python:

In this approach we use a workaround to define a custom gradient for a composition of TensorFlow ops: we override the gradient of the identity op.

import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops

def python_func(x_in, name=None):
    with ops.name_scope(name):
        # We'll later override the gradient of identity to redirect it to our desired gradient function.
        backward_func = tf.identity(x_in)
        forward_func = tf.subtract(2 * tf.exp(x_in), x_in)
        return backward_func + tf.stop_gradient(forward_func - backward_func)

def my_op(func, inp, grad, name=None, victim_op='Identity'):
    # Need to generate a unique name to avoid duplicates.
    rnd_name = 'my_gradient' + str(np.random.randint(0, 1E+8))
    tf.RegisterGradient(rnd_name)(grad)
    g = tf.get_default_graph()
    # Remap the gradient of the Identity op created inside func to rnd_name.
    with g.gradient_override_map({victim_op: rnd_name}):
        return func(inp, name=name)
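For illustration, a usage sketch tying the two together; the gradient function _my_grad below is an assumption chosen to match the forward function 2*exp(x) - x above:

def _my_grad(op, grad):
    # op is the overridden Identity op, so op.inputs[0] is x_in.
    x = op.inputs[0]
    return grad * (2 * tf.exp(x) - 1)  # derivative of 2*exp(x) - x

x = tf.constant(1.0)
y = my_op(python_func, x, _my_grad, name='my_op')
dy_dx = tf.gradients(y, x)[0]

with tf.Session() as sess:
    print(sess.run([y, dy_dx]))  # both equal 2e - 1 ≈ 4.437 at x = 1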

custom_gradient_with_eager:

This approach uses tensorflow.contrib.eager, available as of TensorFlow 1.5, to define custom gradients for a composition of TensorFlow ops.

import tensorflow as tf
import tensorflow.contrib.eager as tfe

@tfe.custom_gradient
def python_func(x_in):
    def grad_func(grad):
        # Derivative of 2*exp(x) - x with respect to x_in.
        return grad * ((2 * tf.exp(x_in)) - 1)

    forward_func = tf.subtract(2 * tf.exp(x_in), x_in)
    return forward_func, grad_func
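A minimal usage sketch under eager execution; tfe.enable_eager_execution and tfe.gradients_function are the TF 1.5-era eager APIs assumed here:

tfe.enable_eager_execution()

x = tf.constant(1.0)
print(python_func(x))                          # 2e - 1 ≈ 4.437
print(tfe.gradients_function(python_func)(x))  # [2e - 1 ≈ 4.437]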

I am not sure how you managed to solve your problem, but the names 'op_name' and 'some_name' in the above solution will not show up on the graph, so you will not be able to use gradient_override_map({"op_name": "SynthGrad"}).

One possible solution: if you have a custom TensorFlow op x = f(a,b) in the forward pass but you want it to behave as g(a,b) in the backward pass, you can do something like this:

t = g(a, b)
out = t + tf.stop_gradient(f(a, b) - t)

However, you need to define g(a,b) in C++ as a dummy/identity operator with a name. Later, you can use gradient_override_map.
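As an illustration of the value/gradient swap itself, here is a hedged sketch with plain Python stand-ins for f and g (these toy definitions are assumptions, not the named C++ op described above):

import tensorflow as tf

def f(a, b):  # value used in the forward pass
    return a * b

def g(a, b):  # function whose gradient is used in the backward pass
    return a + b

a = tf.constant(3.0)
b = tf.constant(4.0)

t = g(a, b)
out = t + tf.stop_gradient(f(a, b) - t)

with tf.Session() as sess:
    print(sess.run(out))                        # 12.0, the value of f(a, b)
    print(sess.run(tf.gradients(out, [a, b])))  # [1.0, 1.0], gradients of g(a, b)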
