TF 2.0 @tf.function example
In the TensorFlow documentation, in the AutoGraph section, we have the following code snippet:
@tf.function
def train(model, optimizer):
  train_ds = mnist_dataset()
  step = 0
  loss = 0.0
  accuracy = 0.0
  for x, y in train_ds:
    step += 1
    loss = train_one_step(model, optimizer, x, y)
    if tf.equal(step % 10, 0):
      tf.print('Step', step, ': loss', loss, '; accuracy', compute_accuracy.result())
  return step, loss, accuracy

step, loss, accuracy = train(model, optimizer)
print('Final step', step, ': loss', loss, '; accuracy', compute_accuracy.result())
I have a small question concerning the step variable: it's a Python integer, not a tensor, and AutoGraph supports built-in Python types such as integers. Therefore tf.equal(step % 10, 0) could be changed to simply step % 10 == 0, right?
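To make the question concrete, here is a minimal, self-contained sketch (tf.data.Dataset.range stands in for the original mnist_dataset, and the loss computation is dropped; both are assumptions for illustration) using the plain Python comparison instead of tf.equal:

```python
import tensorflow as tf

@tf.function
def train():
    step = 0
    for _ in tf.data.Dataset.range(20):
        step += 1
        # Plain Python syntax: AutoGraph converts this `if` on a Tensor
        # condition into a tf.cond under the hood.
        if step % 10 == 0:
            tf.print('Step', step)
    return step

final_step = train()
print(final_step)
```

Running this prints "Step 10" and "Step 20" from inside the graph, showing that the Python-style comparison behaves like the tf.equal version.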
Yes, you're right. The integer variable step remains a Python variable even when converted to its graph representation. You can see the conversion result by calling tf.autograph.to_code(train.python_function).
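A short sketch of that inspection (the tiny train function below is a stand-in for the original, which is an assumption here):

```python
import tensorflow as tf

@tf.function
def train():
    step = 0
    for _ in tf.data.Dataset.range(5):
        step += 1
    return step

# train.python_function is the original, undecorated Python function;
# to_code returns the AutoGraph-transformed source as a string.
generated = tf.autograph.to_code(train.python_function)
print(generated)
```

The printed source is internal and varies between TF versions, but it shows how the for loop and the integer arithmetic are rewritten.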
Without reporting all the code, but only the part related to the step variable, you'll see that
def loop_body(loop_vars, loss_1, step_1):
  with ag__.function_scope('loop_body'):
    x, y = loop_vars
    step_1 += 1
is still a Python operation (otherwise it would be step_1.assign_add(1) if step_1 were a tf.Tensor).
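For contrast, here is a small sketch of the update style used when the counter is graph state rather than a Python variable: a tf.Variable is updated in place with assign_add (the names below are illustrative, not from the original snippet):

```python
import tensorflow as tf

# Graph state: a Variable persists across calls to the tf.function.
step_var = tf.Variable(0, dtype=tf.int64)

@tf.function
def bump():
    # In-place update on a Variable; Python `+=` would instead create
    # a new Tensor rather than mutate the Variable.
    step_var.assign_add(1)
    return step_var.read_value()

bump()
bump()
print(int(step_var.numpy()))  # 2
```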
For more information about AutoGraph and tf.function, I suggest reading the article https://pgaleone.eu/tensorflow/tf.function/2019/03/21/dissecting-tf-function-part-1/ which explains in simple terms what happens when a function is converted.
Although this is not visible in the generated code, the step variable will actually be autoboxed to a Tensor by the for loop, which is converted to a TF while_loop.
You can verify that by adding a print statement:
loss = train_one_step(model, optimizer, x, y)
print(step)
if tf.equal(step % 10, 0):
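A complete, runnable version of this check (with a toy dataset standing in for train_ds, an assumption) records the type of step inside the converted loop:

```python
import tensorflow as tf

seen_types = []

@tf.function
def train():
    step = 0
    for _ in tf.data.Dataset.range(3):
        step += 1
        # This line runs once, at tracing time, so it records the
        # symbolic type step has inside the converted while_loop.
        seen_types.append(type(step).__name__)
    return step

train()
print(seen_types)
```

The recorded type name contains "Tensor", confirming that the Python integer has been autoboxed inside the loop body.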