
Tensorflow placeholder for tf.Variable

The code below throws a "no gradients" error.

self.x1 = tf.placeholder(tf.float64)
self.x2 = tf.placeholder(tf.float64)
self.x3 = tf.placeholder(tf.float64)

self.cos1_denom = tf.norm(self.x1, axis=0) * tf.norm(self.x2, axis=0)
self.cos1 = tf.matmul(self.x1, self.x2, transpose_b=True) / self.cos1_denom
self.cos2_denom = tf.norm(self.x1, axis=0) * tf.norm(self.x3, axis=0)
self.cos2 = tf.matmul(self.x1, self.x3, transpose_b=True) / self.cos2_denom
self.loss = tf.reduce_mean(self.cos2) - tf.reduce_mean(self.cos1)
self.optimizer = tf.train.AdamOptimizer(learning_rate=self.eta).minimize(self.loss)

I believe it's because my loss function depends on placeholders instead of Variables, but in my training function I pass in a Variable's value for the placeholder.

Is there a way to create a placeholder for a Variable?

I think the problem is that there are no trainable variables in your model. Backpropagation adjusts trainable parameters according to the loss, and the graph above has none: placeholders are treated as constants, so the optimizer finds nothing to compute gradients for.
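A minimal sketch of the fix, assuming the goal is to optimize `x1` while `x2` and `x3` stay fixed inputs: declare `x1` as a `tf.Variable` and keep the others as placeholders. The shapes (`4x4`), learning rate, and step count here are illustrative assumptions, not from the question. Written in TF1 graph style via the `tf.compat.v1` layer so it runs on TensorFlow 2.x.

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style graph mode on TF 2.x

tf.disable_eager_execution()
np.random.seed(0)

# x1 is now a trainable Variable, so the optimizer has gradients to follow.
x1 = tf.get_variable("x1", initializer=np.random.rand(4, 4))  # float64 inferred
x2 = tf.placeholder(tf.float64, shape=[4, 4])  # fixed input, constant w.r.t. training
x3 = tf.placeholder(tf.float64, shape=[4, 4])

cos1 = tf.matmul(x1, x2, transpose_b=True) / (tf.norm(x1, axis=0) * tf.norm(x2, axis=0))
cos2 = tf.matmul(x1, x3, transpose_b=True) / (tf.norm(x1, axis=0) * tf.norm(x3, axis=0))
loss = tf.reduce_mean(cos2) - tf.reduce_mean(cos1)

# No "no gradients" error: minimize() now finds the trainable x1.
train_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {x2: np.random.rand(4, 4), x3: np.random.rand(4, 4)}
    before = sess.run(loss, feed)
    for _ in range(100):
        sess.run(train_op, feed)
    after = sess.run(loss, feed)
```

If `x1` must instead be loaded from an existing Variable's value each run, you can pass that value through the placeholder only for the non-trainable inputs; whatever the loss should adjust has to be a Variable in the graph itself.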

