In TensorFlow 2.0: Training error with optimizer.apply_gradients
I am trying to learn the new TF 2.0 alpha release. I'm training a Sequential model for binary classification. My data table is df, which is a numpy array, and classification is the one-hot-encoded dataframe of the classes I must predict.
The definitions of the model, the loss and accuracy functions, and the (Adam) optimizer are all fine. However, I get an error at the training step:
loss_history = []
accuracy_history = []
for epoch in range(n_epochs):
    with tf.GradientTape() as tape:
        # compute binary crossentropy loss (bce_loss)
        current_loss = bce_loss(model(df), classification.astype(np.float64))
    loss_history.append(current_loss)
    # train the model based on the gradient of the loss function
    gradients = tape.gradient(current_loss, model.trainable_variables)
    optimizer.apply_gradients([gradients, model.trainable_variables])  # optimizer = Adam
    # print the training progress
    print(str(epoch+1) + '. Train Loss: ' + str(metrics) + ', Accuracy: ' + str(current_accuracy))
print('\nTraining complete.')
At this point, I get an error pointing at optimizer.apply_gradients(). The error message says:
ValueError: too many values to unpack (expected 2)
Where is my mistake?
I did some research on this type of error, but I found nothing useful related to this particular function. Any help is appreciated.
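For context on what goes wrong here: apply_gradients takes an iterable of (gradient, variable) pairs and unpacks each element into two values. Passing [gradients, model.trainable_variables] makes it try to unpack the entire gradients list as if it were a single pair, which reproduces the error. A plain-Python sketch, with placeholder strings standing in for the real gradients and variables:

```python
# Placeholder stand-ins for the real gradients and trainable variables.
gradients = ["dL/dw1", "dL/dw2", "dL/dw3"]
variables = ["w1", "w2", "w3"]

# Roughly what apply_gradients does internally: unpack each element
# of its argument into a (gradient, variable) pair.
try:
    for grad, var in [gradients, variables]:
        pass
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)
```

The first "pair" it sees is the whole three-element gradients list, hence "expected 2".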
Try this:
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
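zip pairs each gradient with its matching variable, which is exactly the shape apply_gradients unpacks. The same plain-Python sketch with placeholder values shows the difference:

```python
gradients = ["dL/dw1", "dL/dw2", "dL/dw3"]
variables = ["w1", "w2", "w3"]

# Each element is now a proper (gradient, variable) pair,
# so two-value unpacking succeeds for every element.
for grad, var in zip(gradients, variables):
    print(grad, '->', var)
```

Each iteration now yields one gradient and its corresponding variable, so the two-value unpacking inside apply_gradients no longer fails.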