
TensorFlow 2.0: do you need a @tf.function decorator on top of each function?

In TensorFlow 2.0 (still in alpha right now) I know that you can use the decorator @tf.function to turn plain Python code into a graph. Do I have to put @tf.function on top of each function every time I want that? And does @tf.function apply only to the function block that immediately follows it?

While the decorator @tf.function applies to the function block immediately following it, any functions called by it will be executed in graph mode as well. See the Effective TF2 guide where it states:
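As a minimal sketch of this behavior (the names dense_layer and forward are made up for illustration), decorating only the outer function is enough; the inner call is traced into the same graph:

import tensorflow as tf

def dense_layer(x, w, b):
    # Not decorated, yet it runs in graph mode when called from forward
    return tf.nn.relu(tf.matmul(x, w) + b)

@tf.function
def forward(x, w, b):
    # Only the high-level computation carries the decorator
    return dense_layer(x, w, b)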

In TensorFlow 2.0, users should refactor their code into smaller functions which are called as needed. In general, it's not necessary to decorate each of these smaller functions with tf.function; only use tf.function to decorate high-level computations - for example, one step of training, or the forward pass of your model.

@tf.function converts a Python function to its graph representation.
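One quick way to see the conversion (a sketch; f is just an example function) is that Python side effects such as print run only while the function is being traced into a graph, not on every call:

import tensorflow as tf

@tf.function
def f(x):
    print("tracing")  # Python side effect: runs once per trace
    return x * x

f(tf.constant(2.0))   # prints "tracing", returns 4.0
f(tf.constant(3.0))   # same input signature: cached graph is reused, no print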

The pattern to follow is to define the training step function, which is the most computationally intensive function, and decorate it with @tf.function.

Usually, the code looks like this:

# model, loss, and optimizer defined previously

@tf.function
def train_step(features, labels):
    with tf.GradientTape() as tape:
        predictions = model(features)
        loss_value = loss(labels, predictions)
    # Differentiate the computed loss value, not the loss function object
    gradients = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss_value

for features, labels in dataset:
    lv = train_step(features, labels)
    print("loss: ", lv)
