How to write this equation in TensorFlow 2? (4x+2 = 0)
This code does not run in TensorFlow 2. What is the equivalent code?
import tensorflow as tf
# the equation is : 4x+2 = 0
unknownvalue = tf.Variable(0.0)
a = tf.constant(4.0)
b = tf.constant(2.0)
c = tf.multiply(unknownvalue,a) # 4x
equation = tf.add(c,b) # 4x+2
zerovalue = tf.constant(0.0)
diff = tf.square(equation-zerovalue) # difference is : 4x+2 - 0
solving = tf.train.GradientDescentOptimizer(0.01).minimize(diff)
init = tf.global_variables_initializer()
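As a side note, if you only need the original graph-style code to run unchanged, TF2 ships a `tf.compat.v1` shim that restores the TF1 session API. A quick sketch, assuming a standard TF2 install:

```python
import tensorflow as tf

# The compat.v1 shim emulates TF1 graph/session execution inside TF2.
tf.compat.v1.disable_eager_execution()

unknownvalue = tf.Variable(0.0)
equation = 4.0 * unknownvalue + 2.0          # 4x + 2
diff = tf.square(equation)                   # squared residual of 4x+2 = 0
solving = tf.compat.v1.train.GradientDescentOptimizer(0.01).minimize(diff)
init = tf.compat.v1.global_variables_initializer()

with tf.compat.v1.Session() as sess:
    sess.run(init)
    for _ in range(200):
        sess.run(solving)
    result = sess.run(unknownvalue)
    print(result)  # approaches -0.5
```

That said, the compat layer is only a migration aid; the idiomatic TF2 version below is preferable.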
TF2 changed the way training loops are built compared to TF1. I encourage you to read this guide: Writing a training loop from scratch, to learn a bit more about how to do so.
Here is a straightforward TF2 implementation of your TF1 code:
import tensorflow as tf

x = tf.Variable(0.0)
optimizer = tf.optimizers.SGD(1e-2)

for idx in range(1, 26):
    with tf.GradientTape() as tape:
        # the equation is : 4x+2 = 0
        equation = x*4 + 2  # 4x+2
        loss = tf.square(equation)
    grad = tape.gradient(loss, [x])
    optimizer.apply_gradients(zip(grad, [x]))
    if not idx % 5:
        print(f"Iteration:{idx},loss:{loss.numpy():.4f},x:{x.numpy():.4f}")
That results in:
Iteration:5,loss:0.1829,x:-0.4273
Iteration:10,loss:0.0039,x:-0.4894
Iteration:15,loss:0.0001,x:-0.4985
Iteration:20,loss:0.0000,x:-0.4998
Iteration:25,loss:0.0000,x:-0.5000
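As a sanity check, this linear equation has a closed-form solution: 4x + 2 = 0 gives x = -2/4 = -0.5, which is exactly the value gradient descent converges to above:

```python
# Closed-form solution of a*x + b = 0 is x = -b / a
a, b = 4.0, 2.0
x_exact = -b / a
print(x_exact)  # -0.5
```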
The TensorFlow 2 code is much simpler and more intuitive (except maybe the GradientTape part, if you are not used to it):
import tensorflow as tf

# the equation is : 4x+2 = 0
unknownvalue = tf.Variable(0.0)

with tf.GradientTape() as tape:
    diff = (4 * unknownvalue + 2 - 0) ** 2

grads = tape.gradient(diff, [unknownvalue])
tf.optimizers.SGD(0.01).apply_gradients(zip(grads, [unknownvalue]))
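Note that this snippet performs a single gradient step; wrap it in a loop to converge, as in the earlier example. To see what GradientTape computes under the hood, the same update can be written in plain Python with the hand-derived gradient d/dx (4x+2)² = 8(4x+2). A minimal sketch, no TensorFlow required:

```python
# loss(x) = (4x + 2)^2, so dloss/dx = 8 * (4x + 2)
x = 0.0
lr = 0.01  # same learning rate as SGD(0.01)
for _ in range(200):
    grad = 8 * (4 * x + 2)  # what tape.gradient returns for this loss
    x -= lr * grad          # what apply_gradients does for plain SGD
print(round(x, 4))  # -0.5
```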