
tf.constant vs. tf.placeholder

I am going through Andrew Ng's deep learning course and I don't understand the basic purpose of constants. If placeholders can do the trick, why do we need constants at all? Suppose I need to compute a function: the same computation can be performed with constants as well as with placeholders. I am very confused and would be grateful if anyone could shed some light on this.

Constants and placeholders are both nodes in the computation graph with zero inputs and one output -- that is, they represent constant values.

The difference is when you as the programmer specify those values. With a constant, the value is a part of the computation graph itself, specified when the constant is created: tf.constant(4), for instance. With a placeholder, every time you run the computation graph, you can feed in a different value in your feed_dict.
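For illustration, here is a minimal sketch of that difference, assuming the TensorFlow 1.x graph API (under TF 2.x you would go through tf.compat.v1 with eager execution disabled); the names c and p are arbitrary:

import tensorflow as tf

# The constant's value is baked into the graph at construction time.
c = tf.constant(4)

# The placeholder only reserves a spot in the graph; it has no value yet.
p = tf.placeholder(tf.int32)
doubled = p * 2

with tf.Session() as sess:
    print(sess.run(c))                           # 4 -- no feed needed
    print(sess.run(doubled, feed_dict={p: 4}))   # 8
    print(sess.run(doubled, feed_dict={p: 10}))  # 20 -- same graph, new value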

In machine learning, placeholders are usually used for nodes that hold data, because we may want to run the same graph again and again, in a loop, with different parts of our dataset. (This would be impossible using constants.) People also use placeholders for parameters that change during training, like the learning rate. (Training generally involves running your computation graph over and over again with different placeholder values.) Constants are used only for things that are actually constant. For those things, we don't want to use placeholders, because we don't want to have to specify them over and over every time we run our graph.
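As a sketch of that pattern (TensorFlow 1.x API assumed; the toy dataset, shapes, and learning-rate schedule here are made up for illustration):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 1])  # data batch
y = tf.placeholder(tf.float32, shape=[None, 1])  # labels
lr = tf.placeholder(tf.float32)                  # learning rate, changed per step

w = tf.Variable(tf.zeros([1, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
train_op = tf.train.GradientDescentOptimizer(lr).minimize(loss)

data = np.random.rand(100, 1).astype(np.float32)  # toy inputs
labels = 3.0 * data                               # toy targets

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(10):
        start, end = step * 10, (step + 1) * 10
        # Same graph on every iteration; only the fed values change.
        sess.run(train_op, feed_dict={x: data[start:end],
                                      y: labels[start:end],
                                      lr: 0.1 / (1 + step)})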

If you're curious, this Jupyter notebook has an in-depth explanation of the computation graph and the role played by placeholders, constants, and variables: https://github.com/kevinjliang/Duke-Tsinghua-MLSS-2017/blob/master/01B_TensorFlow_Fundamentals.ipynb

As the names indicate, a placeholder has no fixed value; it simply 'holds a place' for a tensor that the computation graph needs. A constant, by contrast, also holds a tensor, but one with a fixed value: once defined (during programming), it never changes during its lifetime (not just within a session). A placeholder, on the other hand, has no value at graph-definition time; its value is fed in when a session run starts. In fact, every placeholder must receive its value in this manner:

session.run(a_variable, feed_dict={a_placeholder: [1.0, 2.1]})

Now one might wonder how a placeholder differs from a tf.Variable. The difference is that a placeholder cannot be evaluated by a session on its own, whereas a variable can (once it has been initialized):

session.run(a_tf_variable)
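A minimal sketch of that contrast, reusing the names from the snippets above (TensorFlow 1.x API assumed; the variable's initial value of 5.0 is arbitrary):

import tensorflow as tf

a_tf_variable = tf.Variable(5.0)
a_placeholder = tf.placeholder(tf.float32, shape=[2])

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    print(session.run(a_tf_variable))  # 5.0 -- a variable carries its own state
    # session.run(a_placeholder)       # would fail: a placeholder has no value
    #                                  # of its own until one is fed in
    print(session.run(a_placeholder,
                      feed_dict={a_placeholder: [1.0, 2.1]}))  # [1.  2.1]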

Placeholders are typically used for input nodes, where we feed in different values on each run (and never expect them to be evaluated on their own). Constants are typically used for values that genuinely never change, such as PI, or the areas of geographical blocks/districts in a population study.
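As a small illustration of that split (TensorFlow 1.x API assumed; the values are made up): PI belongs in the graph as a constant, while the radius varies per run and is fed through a placeholder:

import tensorflow as tf

PI = tf.constant(3.14159, dtype=tf.float32)  # genuinely constant: bake it in
radius = tf.placeholder(tf.float32)          # varies per run: feed it in
area = PI * tf.square(radius)

with tf.Session() as sess:
    print(sess.run(area, feed_dict={radius: 2.0}))  # ~12.566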
