
What happens when I write a function using TensorFlow ops

I wrote a function using TensorFlow ops. I know that when I run the function, it adds many ops to the graph. But I am confused about how to get access to these ops.

For example:

def assign_weights():
    with tf.name_scope('zheng'):
        v = tf.Variable(0, name='v', dtype=tf.float32)  # variable to be updated
        b = tf.placeholder(tf.float32, shape=())        # value fed in at run time
        z = tf.assign(v, b)                             # op that writes b into v
    return z, b

I can use feed_dict to pass a value to b, but only if I set b as a return value. Otherwise, there is no way to access b. If we want to access many ops in the function scope, we have to set many return values. This is very ugly.

I want to know what happens under the hood when I run functions using TensorFlow, and how to get access to the ops in the function scope.

Thank you!

Obviously, it's true that to access an op (or tensor) we need some reference to it. IMHO, one standard workaround is to build your graph in a class, make the relevant tensors attributes of the class, and access them through the object.
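For instance, a minimal sketch of the class-based approach, assuming TensorFlow 1.x graph mode (or tf.compat.v1 in 2.x); the class name and attribute names are illustrative, not from the original post:

import tensorflow as tf

class AssignWeights:
    def __init__(self):
        with tf.name_scope('zheng'):
            # keep handles to the tensors as attributes so callers can reach them
            self.v = tf.Variable(0.0, name='v', dtype=tf.float32)
            self.b = tf.placeholder(tf.float32, shape=())
            self.assign_op = tf.assign(self.v, self.b)

model = AssignWeights()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(model.assign_op, feed_dict={model.b: 3.0})  # access b without returning it
    print(sess.run(model.v))  # 3.0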

Alternatively, if you're more inclined to the functional approach, a better way than returning all the relevant ops and tensors separately would be to return a dict (or a namedtuple).
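A minimal sketch of the dict-returning variant under the same TensorFlow 1.x assumption; the dictionary keys are illustrative:

import tensorflow as tf

def assign_weights():
    with tf.name_scope('zheng'):
        v = tf.Variable(0.0, name='v', dtype=tf.float32)
        b = tf.placeholder(tf.float32, shape=())
        z = tf.assign(v, b)
    # one return value that exposes every tensor the caller might need
    return {'v': v, 'b': b, 'assign': z}

tensors = assign_weights()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tensors['assign'], feed_dict={tensors['b']: 3.0})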

Additionally, there are also specialized functions that return ops by name, e.g. get_operation_by_name.
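For instance, a sketch (TensorFlow 1.x) of fetching a tensor back from the default graph by name instead of returning it; the explicit name 'b' is added here so the graph names are predictable:

import tensorflow as tf

with tf.name_scope('zheng'):
    b = tf.placeholder(tf.float32, shape=(), name='b')  # tensor name becomes 'zheng/b:0'

graph = tf.get_default_graph()
b_again = graph.get_tensor_by_name('zheng/b:0')   # the output tensor
b_op = graph.get_operation_by_name('zheng/b')     # the Placeholder operation itself
print(b_again is b)  # True: same tensor object, no return value needed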

As an aside to this question, you might also want to try out eager execution, which is imperative (ops run immediately instead of being added to a graph).
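A tiny sketch of the eager style, assuming TensorFlow 1.7+ (in 2.x eager execution is the default and the enable call is unnecessary):

import tensorflow as tf

tf.enable_eager_execution()  # must be called before building any graph ops
v = tf.Variable(0.0)
v.assign(3.0)                # runs immediately; no placeholder, session or feed_dict
print(v.numpy())             # 3.0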

Three things happen when you call an op function:

  • create a compute node and add it to the default graph
  • set your inputs as the node's input tensors
  • return the node's output tensor as the return value

For example, with a = tf.add(b, c, name='add') (see the runnable snippet after this list):

  • a node with op Add and name 'add' is added to the default graph
  • b and c are set as the node's input tensors
  • the node's output tensor, named 'add:0', is bound to a
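A runnable version of that example (a sketch, TensorFlow 1.x graph mode):

import tensorflow as tf

b = tf.constant(1.0, name='b')
c = tf.constant(2.0, name='c')
a = tf.add(b, c, name='add')               # 1. an Add node named 'add' joins the default graph

graph = tf.get_default_graph()
add_op = graph.get_operation_by_name('add')
print([t.name for t in add_op.inputs])     # 2. ['b:0', 'c:0'] are the node's input tensors
print(add_op.outputs[0].name, a.name)      # 3. 'add:0' 'add:0' -- the output tensor bound to a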

So you can access nodes via sess.graph; there are many functions for accessing nodes, e.g. get_operation_by_name.
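For example, a sketch (TensorFlow 1.x) that lists every node a function like assign_weights added to the session's graph:

import tensorflow as tf

with tf.name_scope('zheng'):
    v = tf.Variable(0.0, name='v', dtype=tf.float32)
    b = tf.placeholder(tf.float32, shape=())
    z = tf.assign(v, b)

with tf.Session() as sess:
    for op in sess.graph.get_operations():
        print(op.name, op.type)   # e.g. 'zheng/v', 'zheng/Placeholder', 'zheng/Assign', ...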

Also, you can work with the graph via sess.graph_def, which is the graph serialized with protobuf; you can find the protobuf definitions in the TensorFlow source code under tensorflow/core/framework (several .proto files there).
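For example, a sketch (TensorFlow 1.x) that walks the serialized GraphDef; name, op and input are fields of the NodeDef message defined in those .proto files:

import tensorflow as tf

a = tf.add(tf.constant(1.0, name='b'), tf.constant(2.0, name='c'), name='add')
with tf.Session() as sess:
    graph_def = sess.graph_def            # GraphDef protobuf message
    for node in graph_def.node:           # repeated NodeDef
        print(node.name, node.op, list(node.input))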
