
Difference Between Keras Input Layer and Tensorflow Placeholders

I was hoping someone could explain the difference (if any) between the Input Layer in Keras and Placeholders within Tensorflow?

The more I investigate, the more the two appear similar, but so far I am not 100% convinced either way.

Here is what I have observed in favor of the claim that Input Layers and tf Placeholders are the same:

1) The tensor returned from keras.Input() can be used like a placeholder in the feed_dict of tf.Session's run method. Here is part of a simple example using Keras, which adds two tensors (a and b) and concatenates the result with a third tensor (c):

import numpy as np
import keras as k

model = create_graph()  # builds the three-input Keras model described above

con_cat = model.output[0]
ab_add = model.output[1]

# These tensors are used equivalently to tf.placeholder() below
mdl_in_a = model.input[0]
mdl_in_b = model.input[1]
mdl_in_c = model.input[2]

sess = k.backend.get_session()

a_in = rand_array()  # 2x2 numpy arrays
b_in = rand_array()
c_in = rand_array()
# Add a leading batch dimension: (2, 2) -> (1, 2, 2)
a_in = np.reshape(a_in, (1, 2, 2))
b_in = np.reshape(b_in, (1, 2, 2))
c_in = np.reshape(c_in, (1, 2, 2))

# The Keras input tensors are accepted directly as feed_dict keys
val_cat, val_add = sess.run([con_cat, ab_add],
                            feed_dict={mdl_in_a: a_in, mdl_in_b: b_in, mdl_in_c: c_in})
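The snippets in this question call two helpers, create_graph() and rand_array(), that the question never shows. A minimal, hypothetical sketch that matches the described behavior (three 2x2 inputs; outputs are the concatenation and the sum) could look like this:

import numpy as np
from keras.layers import Input, Add, Concatenate
from keras.models import Model

def create_graph():
    # Three 2x2 inputs; the model outputs [concat(a+b, c), a+b]
    a = Input(shape=(2, 2))
    b = Input(shape=(2, 2))
    c = Input(shape=(2, 2))
    ab_add = Add()([a, b])
    con_cat = Concatenate(axis=-1)([ab_add, c])
    return Model(inputs=[a, b, c], outputs=[con_cat, ab_add])

def rand_array():
    # Random 2x2 numpy array
    return np.random.rand(2, 2)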

2) The docs from Tensorflow Contrib regarding the Keras Input Layer mention placeholders in their argument descriptions:

"sparse: A boolean specifying whether the placeholder to be created is sparse" “sparse:一个布尔值,指定要创建的占位符是否稀疏”

Here is what I have observed in favor of the claim that Input Layers and tf Placeholders are NOT the same:

1) I have seen people use tf.placeholder instead of the tensor returned by the Input Layer. Something like:

import numpy as np
import tensorflow as tf
import keras as k

# External placeholders take the place of the model's own Input tensors
a_holder = tf.placeholder(tf.float32, shape=(None, 2, 2))
b_holder = tf.placeholder(tf.float32, shape=(None, 2, 2))
c_holder = tf.placeholder(tf.float32, shape=(None, 2, 2))

model = create_graph()

# Calling the model on the placeholders rewires its graph onto them
con_cat, ab_add = model([a_holder, b_holder, c_holder])

sess = k.backend.get_session()

a_in = rand_array()  # 2x2 numpy arrays
b_in = rand_array()
c_in = rand_array()
# Add a leading batch dimension: (2, 2) -> (1, 2, 2)
a_in = np.reshape(a_in, (1, 2, 2))
b_in = np.reshape(b_in, (1, 2, 2))
c_in = np.reshape(c_in, (1, 2, 2))

val_cat, val_add = sess.run([con_cat, ab_add],
                            feed_dict={a_holder: a_in, b_holder: b_in, c_holder: c_in})

Input() returns a handle to the created placeholder and does not create any other TF operators; a Tensor stands for both the output of an operation and a placeholder, so there is no contradiction.
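A quick way to see this without TensorBoard (a sketch, assuming the same TF 1.x graph mode as above): the tensor returned by Input() is backed by a plain Placeholder op.

import tensorflow as tf
from keras.layers import Input

x = Input(shape=(2, 2))
print(x.op.type)  # 'Placeholder' -- no other ops were added to the graph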

To analyse what exactly is created by Input(), let's run the following code:

import tensorflow as tf
from keras.layers import Input

n_features = 4  # any feature count

with tf.name_scope("INPUT_LAYER"):
    input_l = Input(shape=[n_features])

Then:

writer = tf.summary.FileWriter('./my_graph', tf.get_default_graph())
writer.close()

And launch Tensorboard from your console:

tensorboard --logdir="./my_graph"

Look at the results:

[TensorBoard graph screenshot]
