
Implementing the Heaviside step function in TensorFlow

I want to create a Heaviside step function in TensorFlow. Since the Heaviside function is not differentiable, I also need to choose a derivative approximation and define a custom gradient, so the full implementation looks like this:

import tensorflow as tf


@tf.RegisterGradient("HeavisideGrad")
def _heaviside_grad(unused_op: tf.Operation, grad: tf.Tensor):
    x = unused_op.inputs[0]
    # During backpropagation heaviside behaves like sigmoid
    return tf.sigmoid(x) * (1 - tf.sigmoid(x)) * grad


def heaviside(x: tf.Tensor, g: tf.Graph = tf.get_default_graph()):
    custom_grads = {
        "Sign": "HeavisideGrad"
    }
    with g.gradient_override_map(custom_grads):
        # TODO: heaviside(0) currently returns 0. We need heaviside(0) = 1
        sign = tf.sign(x)
        # tf.stop_gradient is needed to exclude tf.maximum from derivative
        step_func = sign + tf.stop_gradient(tf.maximum(0.0, sign) - sign)
        return step_func

There is one caveat in my implementation: tf.sign(0) returns a zero value, so heaviside(0) also returns zero, and I want heaviside(0) to return 1. How can I achieve this behavior?
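
For example (a quick check in a session):

with tf.Session() as sess:
    print(sess.run(tf.sign(0.0)))                  # 0.0
    print(sess.run(heaviside(tf.constant(0.0))))   # 0.0, but I want 1.0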

A very hacky way would be to use

1 - max(0.0, sign(-x)) 

as your step function instead of

max(0.0, sign(x))

Another option would be to use greater_equal and cast the result to your desired type, and override its gradient with the sigmoid override you already have.
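
In the notation of the question, a sketch of those two step-function expressions (the gradient override still has to be wired in as before):

# Option 1: flip the sign so that x = 0 lands on the positive branch
step = 1.0 - tf.maximum(0.0, tf.sign(-x))              # 1 for x >= 0, 0 for x < 0

# Option 2: compare directly and cast (this is what the accepted answer below uses)
step = tf.cast(tf.greater_equal(x, 0.0), tf.float32)   # 1 for x >= 0, 0 for x < 0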

The easiest fix for your code is to add a small number to the result of tf.sign() and take the sign again. This results in a 1 for an input of 0:

sign = tf.sign(tf.sign(x) + 0.1)
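
A quick check of the values (sketch):

x = tf.constant([-2.0, 0.0, 3.0])
sign = tf.sign(tf.sign(x) + 0.1)   # [-1., 1., 1.] -- the 0 from tf.sign(0.) becomes 0.1, then 1
step = tf.maximum(0.0, sign)       # [ 0., 1., 1.]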

OK, I think I figured it out. Many thanks to etarion, who pointed out the correct approach to solving my issue.

So the basic idea is to use tf.greater_equal instead of the combination of tf.sign and tf.maximum. The custom gradient is applied to the tf.identity operation.

Here is the updated implementation of the heaviside function:

import uuid

import tensorflow as tf

@tf.RegisterGradient("HeavisideGrad")
def _heaviside_grad(unused_op: tf.Operation, grad: tf.Tensor):
    return tf.maximum(0.0, 1.0 - tf.abs(unused_op.inputs[0])) * grad


def heaviside(x: tf.Tensor, g: tf.Graph = tf.get_default_graph()):
    custom_grads = {
        "Identity": "HeavisideGrad"
    }
    with g.gradient_override_map(custom_grads):
        i = tf.identity(x, name="identity_" + str(uuid.uuid1()))
        ge = tf.greater_equal(x, 0, name="ge_" + str(uuid.uuid1()))
        # tf.stop_gradient is needed to exclude tf.to_float from derivative
        step_func = i + tf.stop_gradient(tf.to_float(ge) - i)
        return step_func
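
A minimal usage sketch (assuming TF 1.x graph mode) to check the value at zero and the surrogate gradient:

x = tf.placeholder(tf.float32, shape=[3])
y = heaviside(x)
dy_dx = tf.gradients(y, x)[0]

with tf.Session() as sess:
    print(sess.run(y, {x: [-1.0, 0.0, 1.0]}))      # [0. 1. 1.]
    print(sess.run(dy_dx, {x: [-1.0, 0.0, 1.0]}))  # [0. 1. 0.] from the hat-shaped surrogate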

This makes a unit step function using only TensorFlow APIs, so the result is still a tensor:

#in Eager mode, for a scalar input v
def heaviside(v):
  # 1 - max(0, -sign(v)) is 1 for v >= 0 and 0 for v < 0
  return 1-tf.reduce_max(tf.constant([0,-tf.sign(v).numpy()], tf.float32));
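
For example, with eager execution enabled:

print(heaviside(tf.constant(-3.0)))   # tf.Tensor(0.0, ...)
print(heaviside(tf.constant(0.0)))    # tf.Tensor(1.0, ...)
print(heaviside(tf.constant(2.0)))    # tf.Tensor(1.0, ...)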

In TensorFlow 2, it is better to use the @tf.custom_gradient decorator:

import tensorflow as tf

@tf.custom_gradient
def heaviside(X):
  #Python 'if'/'else' on tensors won't work when this op is traced into a graph,
  #so use 'tf.cond'
  List = [];

  for I in range(BSIZE): #Batch size, assumed to be defined before this function is called
    Item = tf.cond(X[I]<0, lambda: tf.constant([0], tf.float32), 
                           lambda: tf.constant([1], tf.float32));  
    List.append(Item);

  U = tf.stack(List);

  #Heaviside half-maximum formula
  #U = (tf.sign(X)+1)/2;

  #Div is differentiation intermediate value
  def grad(Div):
    return Div*1; #Heaviside has no true gradient; pass the incoming gradient through unchanged

  return U,grad;
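
A usage sketch (assuming eager execution, with BSIZE set to the batch size of X):

BSIZE = 3;
X = tf.constant([-1.0, 0.0, 2.0]);

with tf.GradientTape() as T:
  T.watch(X);
  U = heaviside(X);

print(U);                 # 0 for the negative entry, 1 for the rest
print(T.gradient(U, X));  # the incoming gradient passes straight through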
