
How to obtain the TensorFlow code version of a NN built in Keras?

I have been working with Keras for a week or so. I know that Keras can use either TensorFlow or Theano as a backend. In my case, I am using TensorFlow.

So I'm wondering: is there a way to write a NN in Keras, and then print out the equivalent version in TensorFlow?

MVE

For instance, suppose I write

    # imports (assuming the standalone Keras package)
    from keras.models import Sequential
    from keras.layers import Dense

    # create a sequential model
    model = Sequential()
    # add layers; input_dim is an int (use input_shape=(10,) for the tuple form)
    model.add(Dense(100, input_dim=10, activation='relu'))
    model.add(Dense(1, activation='linear'))
    # compile model
    model.compile(optimizer='adam', loss='mse')
    # fit
    model.fit(Xtrain, ytrain, epochs=100, batch_size=32)
    # predict
    ypred = model.predict(Xtest, batch_size=32)
    # evaluate (needs the targets as well as the inputs)
    result = model.evaluate(Xtest, ytest)

This code might be wrong, since I just started, but I think you get the idea.

What I want to do is write down this code, run it (or maybe not even that!), and then have a function or something that will produce the TensorFlow code that Keras has written to do all these calculations.

First, let's clarify some of the language in the question. TensorFlow (and Theano) use computational graphs to perform tensor computations. So, when you ask if there is a way to "print out the equivalent version" in TensorFlow, or "produce TensorFlow code," what you're really asking is: how do you export a TensorFlow graph from a Keras model?

As the Keras author states in this thread,

When you are using the TensorFlow backend, your Keras code is actually building a TF graph. You can just grab this graph.

Keras only uses one graph and one session.

However, he links to a tutorial whose details are now outdated. But the basic concept has not changed.

We just need to:

  • Get the TensorFlow session
  • Export the computation graph from the TensorFlow session
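A minimal sketch of those two steps, using the TF 1.x-era API described in this answer (on TF 2.x the same calls live under tf.compat.v1 with eager execution disabled; the output path is an assumption):

```python
import tensorflow as tf

# On TF 1.x these calls are available on `tf` directly (the session getter
# was tf.contrib.keras.backend.get_session); on TF 2.x use tf.compat.v1.
tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Step 1: get the TensorFlow session Keras is using
sess = tf1.keras.backend.get_session()

# Step 2: export the computation graph from that session
graph_def = sess.graph.as_graph_def()
tf1.train.write_graph(graph_def, "/tmp", "keras_graph.pbtxt", as_text=True)
```

Any Keras layers you build before grabbing the session will show up as ops in this graph.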

Do it with Keras

The keras_to_tensorflow repository contains a short example, in an iPython notebook, of how to export a model from Keras for use in TensorFlow. This is basically using TensorFlow directly. It isn't a clearly-written example, but I'm throwing it out there as a resource.

Do it with TensorFlow

It turns out we can actually get the TensorFlow session that Keras is using from TensorFlow itself, using the tf.contrib.keras.backend.get_session() function. It's pretty simple to do: just import and call. This returns the TensorFlow session.

Once you have the TensorFlow session variable, you can use the SavedModelBuilder to save your computational graph (there is a guide and example for using SavedModelBuilder in the TensorFlow docs). If you're wondering how the SavedModelBuilder works and what it actually gives you, the SavedModelBuilder README in the GitHub repo is a good guide.
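A minimal sketch of that saving step, using the TF 1.x-era SavedModelBuilder API (on TF 2.x it lives under tf.compat.v1). The tiny hand-built graph here is a stand-in assumption; with Keras you would use the session returned by get_session() instead:

```python
import os
import tempfile

import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# a stand-in graph and session (with Keras, grab the session via get_session())
sess = tf1.Session()
x = tf1.placeholder(tf.float32, shape=(None, 10), name="x")
w = tf1.get_variable("w", shape=(10, 1))
y = tf1.identity(tf1.matmul(x, w), name="y")
sess.run(tf1.global_variables_initializer())

# SavedModelBuilder requires an export directory that does not exist yet
export_dir = os.path.join(tempfile.mkdtemp(), "saved_model")
builder = tf1.saved_model.builder.SavedModelBuilder(export_dir)
builder.add_meta_graph_and_variables(sess, [tf1.saved_model.tag_constants.SERVING])
builder.save()  # writes saved_model.pb plus a variables/ directory
```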

PS - If you are planning on heavy usage of TensorFlow + Keras in combination, have a look at the other modules available in tf.contrib.keras

So you want to use a different function than WX+b for your neurons. Well, in TensorFlow you explicitly calculate this product, so for example you do

y_ = tf.matmul(X, W)

You simply have to write your formula and let the network learn. It should not be difficult to implement.
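As a concrete illustration (in NumPy rather than TensorFlow, and with a made-up alternative formula), the point is just that the neuron is an expression you write yourself:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 10))   # a batch of 32 samples, 10 features
W = rng.standard_normal((10, 1))    # weights the network would learn
b = np.zeros(1)                     # bias the network would learn

# the standard affine neuron: y = XW + b
y_standard = X @ W + b

# your own formula instead, e.g. squaring the pre-activation (illustrative only)
y_custom = (X @ W) ** 2 + b

# in TensorFlow you would write the same expressions with tf.matmul
# and let the optimizer fit W and b through them
print(y_standard.shape, y_custom.shape)
```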

In addition, what you are trying to do (according to the paper you link) is called batch normalization and is relatively standard. The idea is that you normalize your intermediate steps (in the different layers). Check for example https://www.google.ch/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0ahUKEwikh-HM7PnWAhXDXRQKHZJhD9EQFggyMAE&url=https%3A%2F%2Farxiv.org%2Fabs%2F1502.03167&usg=AOvVaw1nGzrGnhPhNGEczNwcn6WK or https://www.google.ch/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0ahUKEwikh-HM7PnWAhXDXRQKHZJhD9EQFghCMAM&url=https%3A%2F%2Fbcourses.berkeley.edu%2Ffiles%2F66022277%2Fdownload%3Fdownload_frd%3D1%26verifier%3DoaU8pqXDDwZ1zidoDBTgLzR8CPSkWe6MCBKUYan7&usg=AOvVaw0AHLwD_0pUr1BSsiiRoIFc
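For reference, the core of batch normalization (from the first paper linked above) is easy to sketch in NumPy; gamma and beta are the learned scale and shift parameters, and eps just avoids division by zero:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 10)) * 5 + 3    # un-normalized layer activations
out = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
print(out.mean(axis=0).round(3))             # each feature now has mean ~0
print(out.std(axis=0).round(3))              # and standard deviation ~1
```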

Hope that helps, Umberto
