How to avoid graph duplication when using tf.import_graph_def to append a new input pipeline?
I am trying to have two different input pipelines for a model I am building in TensorFlow. To achieve this, I have taken answers from here and here, but each time I run the code and save the graph to display it in TensorBoard, or print all the nodes available in the graph, it shows that the original model has been duplicated instead of the new input being attached to the corresponding node.
Here is a minimal example:
import tensorflow as tf
# Creates toy dataset with tf.data API
dataset = tf.data.Dataset.from_tensor_slices(tf.random_uniform([4, 10]))
dataset = dataset.batch(32)
# Input placeholder
x = tf.placeholder(tf.float32,shape=[None,10],name='x')
# Main model
with tf.variable_scope('model'):
    y = tf.add(tf.constant(2.), x, name='y')
    z = tf.add(tf.constant(2.), y, name='z')
# Session
sess = tf.Session()
# Iterator that will be the new input pipeline for training
iterator = dataset.make_initializable_iterator()
next_elem = iterator.get_next()
graph_def = tf.get_default_graph().as_graph_def()
# If uncommented, it creates an error
#tf.reset_default_graph()
# Create the input to the node y
x_ds = tf.import_graph_def(graph_def=graph_def,
input_map={'x:0':next_elem})
# Write to disk the graph
tf.summary.FileWriter('./',sess.graph)
# Print all the nodes names
for node in sess.graph_def.node:
    print(node.name)
I would expect only one y and one z node. However, when displaying all the node names or checking the graph in TensorBoard, there are two structures: the original one, and a copy inside the 'import' namespace with the dataset wired into y. Any idea how to solve this? Or is this the expected behaviour?
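The behaviour can be reproduced even more compactly. The sketch below is my own illustration (written against the `tf.compat.v1` shim so it also runs under TF 2.x; the question itself uses plain TF 1.x): `input_map` only rewires the freshly imported copies, while the original ops stay in place.

```python
import tensorflow.compat.v1 as tf  # compat shim; the original code uses TF 1.x directly

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 10], name='x')
    y = tf.add(tf.constant(2.), x, name='y')
    graph_def = g.as_graph_def()  # snapshot of the model so far

    # A second input intended to replace x
    new_input = tf.placeholder(tf.float32, shape=[None, 10], name='new_input')

    # import_graph_def copies every node of graph_def into the current
    # graph; input_map rewires those copies, not the existing ops.
    tf.import_graph_def(graph_def,
                        input_map={'x:0': new_input},
                        name='import')

names = [op.name for op in g.get_operations()]
# Both the original 'y' and a duplicated 'import/y' are now present
print([n for n in names if n.endswith('y')])
```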
After reading some other questions I found the answer to my problem. Here is a fantastic explanation of how to join nodes from different graphs.
The key here is to explicitly define the graph in which each op will be created. Take the following code as an example.
import numpy as np
import tensorflow as tf
### Main model with a placeholder as input
# Create a graph
g_1 = tf.Graph()
# Define everything inside it
with g_1.as_default():
    # Input placeholder
    x = tf.placeholder(tf.float64, shape=[None, 2], name='x')
    with tf.variable_scope('model'):
        y = tf.add(tf.constant(2., dtype=tf.float64), x, name='y')
        z = tf.add(tf.constant(2., dtype=tf.float64), y, name='z')
gdef_1 = g_1.as_graph_def()
### Change the input pipeline
# Create another graph
g_2 = tf.Graph()
# Define everything inside it
with g_2.as_default():
    # Create a toy tf.data dataset
    dataset = tf.data.Dataset.from_tensor_slices(np.array([[1., 2], [3, 4], [5, 6]]))
    dataset = dataset.batch(1)
    # Iterator that will be the new input pipeline for training
    iterator = dataset.make_initializable_iterator()
    next_elem = iterator.get_next()
    # Wrap next_elem in an identically-valued op with an explicit name
    # so it can be manipulated later
    next_elem = tf.identity(next_elem, name='next_elem')
    # Create the new pipeline, using next_elem as input instead of x
    z, = tf.import_graph_def(gdef_1,
                             input_map={'x:0': next_elem},
                             return_elements=['model/z:0'],
                             name='')  # Empty name keeps the same scope as the original
# Create session linked to g_1
sess_1 = tf.Session(graph=g_1)
# Create session linked to g_2
sess_2 = tf.Session(graph=g_2)
# Initialize the iterator
sess_2.run(iterator.initializer)
# Write the graph to disk
tf.summary.FileWriter('./',sess_2.graph)
# Testing placeholders
out = sess_1.run([y],feed_dict={x:np.array([[1.,2.]],dtype=np.float64)})
print(out)
# Testing tf.data
out = sess_2.run([z])
print(out)
Now, everything lives in its own graph.
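As a quick sanity check (my own addition, again using the `tf.compat.v1` shim rather than plain TF 1.x), counting the model ops in each graph confirms that the import creates exactly one copy and nothing is duplicated:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # compat shim so the TF 1.x APIs are available

# Graph 1: the model with a placeholder input
g_1 = tf.Graph()
with g_1.as_default():
    x = tf.placeholder(tf.float64, shape=[None, 2], name='x')
    with tf.variable_scope('model'):
        y = tf.add(tf.constant(2., dtype=tf.float64), x, name='y')
gdef_1 = g_1.as_graph_def()

# Graph 2: a new input plus one imported copy of the model
g_2 = tf.Graph()
with g_2.as_default():
    new_input = tf.placeholder(tf.float64, shape=[None, 2], name='new_input')
    y_2, = tf.import_graph_def(gdef_1,
                               input_map={'x:0': new_input},
                               return_elements=['model/y:0'],
                               name='')

# Each graph contains exactly one 'model/y' op: no duplication
count_1 = sum(op.name == 'model/y' for op in g_1.get_operations())
count_2 = sum(op.name == 'model/y' for op in g_2.get_operations())
print(count_1, count_2)

# The imported copy is fully functional
with tf.Session(graph=g_2) as sess:
    out = sess.run(y_2, feed_dict={new_input: np.array([[1., 2.]])})
    print(out)  # adds 2 to each element: [[3. 4.]]
```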