
How to avoid graph duplication when using tf.import_graph_def to append a new input pipeline?

I am trying to have two different input pipelines for a model I am building in TensorFlow. To achieve this, I took the answers from here and here, but each time I run the code and save the graph to display it in TensorBoard, or print all the nodes available in the graph, it shows that the original model has been duplicated instead of the new input being attached to the corresponding node.

Here is a minimal example:

import tensorflow as tf

# Creates toy dataset with tf.data API
dataset = tf.data.Dataset.from_tensor_slices(tf.random_uniform([4, 10]))
dataset = dataset.batch(32)

# Input placeholder
x = tf.placeholder(tf.float32,shape=[None,10],name='x')

# Main model
with tf.variable_scope('model'):
    y = tf.add(tf.constant(2.),x,name='y')
    z = tf.add(tf.constant(2.),y,name='z')

# Session
sess = tf.Session()

# Iterator that will be the new input pipeline for training
iterator = dataset.make_initializable_iterator()
next_elem = iterator.get_next()

graph_def = tf.get_default_graph().as_graph_def()

# If uncommented, it creates an error, because next_elem would no
# longer belong to the default graph
#tf.reset_default_graph()

# Map the dataset output onto the placeholder x
# (with no return_elements, import_graph_def returns None)
tf.import_graph_def(graph_def=graph_def,
                    input_map={'x:0': next_elem})

# Write to disk the graph
tf.summary.FileWriter('./',sess.graph)

# Print all the nodes names
for node in sess.graph_def.node:
    print(node.name)

I would expect only one y and one z node. However, when listing all the node names, or inspecting the graph in TensorBoard, there are two structures: the original one, and a copy under the 'import' name scope with the dataset wired into y. Any idea how to solve this? Or is this the expected behaviour?
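For reference, the duplication can be reproduced in isolation (a stripped-down sketch of my own, separate from the model above): `tf.import_graph_def` always copies every node of the given `GraphDef` into the current default graph, even when that `GraphDef` was serialized from the very same graph.

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    a = tf.constant(1., name='a')
    # Re-import the graph's own GraphDef into itself,
    # which is what happens implicitly in the code above
    tf.import_graph_def(g.as_graph_def(), name='import')
    print([op.name for op in g.get_operations()])
    # ['a', 'import/a'] -- the original node plus a copy
```

So the import itself is working as documented; the problem is only that it lands in a graph that already contains the original nodes.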

After reading some other questions I found the answer to my problem. Here is a fantastic explanation of how to join nodes from different graphs.

The key here is to manually define the graph in which each op will be created. Take the next code as an example.

import numpy as np
import tensorflow as tf

### Main model with a placeholder as input

# Create a graph
g_1 = tf.Graph()

# Define everything inside it
with g_1.as_default():
    # Input placeholder
    x = tf.placeholder(tf.float64,shape=[None,2],name='x')
    with tf.variable_scope('model'):
        y = tf.add(tf.constant(2.,dtype=tf.float64),x,name='y')
        z = tf.add(tf.constant(2.,dtype=tf.float64),y,name='z')

gdef_1 = g_1.as_graph_def()


### Change the input pipeline

# Create another graph
g_2 = tf.Graph()

# Define everything inside it
with g_2.as_default():
    # Create a toy dataset with the tf.data API
    dataset = tf.data.Dataset.from_tensor_slices(np.array([[1.,2],[3,4],[5,6]]))
    dataset = dataset.batch(1)

    # Iterator that will be the new input pipeline for training
    iterator = dataset.make_initializable_iterator()
    next_elem = iterator.get_next()
    # Wrap next_elem in a named identity op so it can be
    # manipulated later
    next_elem = tf.identity(next_elem, name='next_elem')

    # Create the new pipeline. Use next_elem as input instead of x
    z, = tf.import_graph_def(gdef_1,
        input_map={'x:0':next_elem},
        return_elements=['model/z:0'],
        name='') # Set name to '' so it keeps the same scope as the original

# Create session linked to g_1
sess_1 = tf.Session(graph=g_1)

# Create session linked to g_2
sess_2 = tf.Session(graph=g_2)

# Initialize the iterator
sess_2.run(iterator.initializer)

# Write the graph to disk
tf.summary.FileWriter('./',sess_2.graph)

# Testing placeholders
out = sess_1.run([y],feed_dict={x:np.array([[1.,2.]],dtype=np.float64)})
print(out)

# Testing tf.data
out = sess_2.run([z])
print(out)

Now each pipeline lives in its own graph, with no duplication.
