
object is not callable, when using tf.optimizers.Adam.minimize()

I'm new to TensorFlow (2.0), so I wanted to ease into it with a simple linear regression. I have the following code, but I can't figure out what is wrong with it.

I have tried going through the documentation, but so far I haven't found an answer.

import numpy as np
import tensorflow as tf

x = np.random.normal(loc=10., scale = 0.1, size=170)
y = np.repeat(10.,170)
a_init = tf.random_normal_initializer()
a = tf.Variable(initial_value=a_init(shape = [1], dtype = 'float32'),trainable=True)
pred = tf.multiply(a,x)
loss = tf.nn.l2_loss(pred-y)
optim = tf.optimizers.Adam(lr = 0.002)
entreno = optim.minimize(loss, [a])

I get the following error:

Traceback (most recent call last)
<ipython-input-45-e1a191781d0a> in <module>
      2 loss = tf.nn.l2_loss(pred-y)
      3 optim = tf.optimizers.Adam(lr = 0.002)
----> 4 entreno = optim.minimize(loss, [a])

TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable

In case it helps, I have TensorFlow 1 code that does the same thing:

import tensorflow
import numpy as np
tf = tensorflow.compat.v1
x = np.random.normal(loc=1.,scale=0.1, size = 220)
y = np.repeat(14.37,220)
tf.disable_eager_execution()
x_d = tf.placeholder(shape = [1], dtype=tf.float32)
y_t = tf.placeholder(shape = [1], dtype = tf.float32)
A = tf.Variable(tf.random_normal(shape=[1]))
my_pred = tf.multiply(A,x_d)
loss = tf.square(my_pred-y_t)
optim = tf.train.GradientDescentOptimizer(learning_rate=0.02)
train_step = optim.minimize(loss)
init = tf.global_variables_initializer()
session = tf.Session()
session.run(init)
for _ in range(241):
    idx = np.random.choice(220)
    ranx = [x[idx]]
    rany = [y[idx]]
    session.run(train_step, feed_dict ={x_d : ranx, y_t : rany})
    if _%20 == 0:
        print("A = {}, Loss : {}".format(session.run(A), session.run(loss, feed_dict={x_d:ranx, y_t:rany})))

TensorFlow has a guide on how to do exactly this: https://www.tensorflow.org/guide/eager. Here is the code from it,

class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.W = tf.Variable(0.1, name='weight')

    def call(self, inputs):
        return inputs * self.W


x = tf.random.normal(mean=1, stddev=0.2, shape=[170])
noise = tf.random.normal([170])
y = x*10 + noise

# The loss function to be optimized
def loss(model, inputs, targets):
    error = model(inputs) - targets
    return tf.reduce_mean(tf.square(error))

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return tape.gradient(loss_value, [model.W,])

# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.

model = Model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

print("Initial loss: {}".format(loss(model, x, y)))

# Training loop
for i in range(300):
    grads = grad(model, x, y)
    optimizer.apply_gradients(zip(grads, [model.W,]))
    if i % 20 == 0:
        print("Loss at step {:03d}: {:}".format(i, loss(model, x, y)))

slightly modified (just a few changes to the names).
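Applied to the setup in the question, that pattern might look roughly like the sketch below. The variable a, the data, the learning rate and the tf.nn.l2_loss loss are taken from the question; the 300-step loop and the explicit float32 casts are assumptions added here.

import numpy as np
import tensorflow as tf

x = np.random.normal(loc=10., scale=0.1, size=170).astype('float32')
y = np.repeat(10., 170).astype('float32')

a = tf.Variable(tf.random_normal_initializer()(shape=[1]), trainable=True)
optimizer = tf.optimizers.Adam(learning_rate=0.002)

for step in range(300):
    with tf.GradientTape() as tape:
        pred = tf.multiply(a, x)              # forward pass, recorded on the tape
        loss_value = tf.nn.l2_loss(pred - y)
    grads = tape.gradient(loss_value, [a])    # d(loss)/d(a)
    optimizer.apply_gradients(zip(grads, [a]))
    if step % 50 == 0:
        print("step {:03d}: a = {}, loss = {}".format(step, a.numpy(), loss_value.numpy()))

The key difference from the TF1 version is that the forward pass and loss are computed inside the GradientTape context on every iteration, instead of being built once as a static graph.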

You can pass the loss to minimize() as a zero-argument callable by wrapping it in a lambda. Note that the lambda has to recompute the loss from the variable each time it is called; an already-evaluated eager tensor cannot be differentiated, so instead of lambda: loss you want something like:

entreno = optim.minimize(lambda: tf.nn.l2_loss(tf.multiply(a, x) - y), [a])
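A minimal end-to-end sketch of that approach, reusing the question's data and optimizer settings (the 300-step loop and the float32 casts are assumptions added here):

import numpy as np
import tensorflow as tf

x = np.random.normal(loc=10., scale=0.1, size=170).astype('float32')
y = np.repeat(10., 170).astype('float32')

a = tf.Variable(tf.random_normal_initializer()(shape=[1]), trainable=True)
optim = tf.optimizers.Adam(learning_rate=0.002)

# The callable re-runs the forward pass every time minimize() invokes it,
# so the optimizer's internal tape can differentiate the loss w.r.t. `a`.
loss_fn = lambda: tf.nn.l2_loss(tf.multiply(a, x) - y)

for step in range(300):
    optim.minimize(loss_fn, var_list=[a])
    if step % 50 == 0:
        print("step {:03d}: a = {}, loss = {}".format(step, a.numpy(), loss_fn().numpy()))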
