
tensorflow, splitting autoencoder after training

I have an autoencoder model in TensorFlow 1.x (not Keras), and I am trying to split the model into an encoder and a decoder after training.

Both functions are in the same scope, and I have 3 placeholders:

self.X = tf.placeholder(shape=[None, vox_res64, vox_res64, vox_res64, 1], dtype=tf.float32)
self.Z = tf.placeholder(shape=[None,500], dtype=tf.float32)

self.Y = tf.placeholder(shape=[None, vox_rex256, vox_rex256, vox_rex256, 1], dtype=tf.float32)

    with tf.variable_scope('aeu'):
        self.lfc = self.encoder(self.X)
        self.Y_pred, self.Y_pred_modi = self.decoder(self.lfc)

The encoder and decoder are as follows:

    def encoder(self, X):
        with tf.device('/gpu:' + GPU0):
            X = tf.reshape(X, [-1, vox_res64, vox_res64, vox_res64, 1])
            c_e = [1, 64, 128, 256, 512]
            s_e = [0, 1, 1, 1, 1]
            layers_e = [X]
            for i in range(1, 5):
                layer = tools.Ops.conv3d(layers_e[-1], k=4, out_c=c_e[i], str=s_e[i], name='e' + str(i))
                layer = tools.Ops.maxpool3d(tools.Ops.xxlu(layer, label='lrelu'), k=2, s=2, pad='SAME')
                layers_e.append(layer)

            ### fc
            [_, d1, d2, d3, cc] = layers_e[-1].get_shape()
            d1 = int(d1); d2 = int(d2); d3 = int(d3); cc = int(cc)
            lfc = tf.reshape(layers_e[-1], [-1, d1 * d2 * d3 * cc])
            lfc = tools.Ops.xxlu(tools.Ops.fc(lfc, out_d=500, name='fc1'), label='relu')
            print(d1)
            print(cc)
        return lfc


    def decoder(self, Z):
        with tf.device('/gpu:' + GPU0):
            lfc = tools.Ops.xxlu(tools.Ops.fc(Z, out_d=2*2*2*512, name='fc2'), label='relu')
            lfc = tf.reshape(lfc, [-1, 2, 2, 2, 512])

            c_d = [0, 256, 128, 64]
            s_d = [0, 2, 2, 2]
            layers_d = [lfc]
            for j in range(1, 4):
                layer = tools.Ops.deconv3d(layers_d[-1], k=4, out_c=c_d[j], str=s_d[j], name='d' + str(len(layers_d)))
                layer = tools.Ops.xxlu(layer, label='relu')
                layers_d.append(layer)
            ###
            layer = tools.Ops.deconv3d(layers_d[-1], k=4, out_c=1, str=2, name='dlast')
            print("****************************", layer)
            ###
            Y_sig = tf.nn.sigmoid(layer)
            Y_sig_modi = tf.maximum(Y_sig, 0.01)

        return Y_sig, Y_sig_modi

When I try to use the model after training:


 X = tf.get_default_graph().get_tensor_by_name("Placeholder:0")
 Z = tf.get_default_graph().get_tensor_by_name("Placeholder_1:0")
 Y_pred = tf.get_default_graph().get_tensor_by_name("aeu/Sigmoid:0")
 lfc = tf.get_default_graph().get_tensor_by_name("aeu/Relu:0")


Fetching the latent code works fine:

 lc = sess.run(lfc, feed_dict={X: x_sample})

Now, when I try to use the latent code as input to the decoder, I get an error saying I have to feed X (the placeholder):

 y_pred = sess.run(Y_pred, feed_dict={Z: lc})

How can I split the model into an encoder and a decoder? I searched, but I only found Keras examples.

The first thing I notice is that you haven't passed self.Z into the decoder anywhere, so TensorFlow can't automatically link that placeholder with the z you previously used.

There are a couple of things you can do to fix this. The easiest is to recreate the decoder graph, setting reuse=True when you call the variable scope:


    with tf.variable_scope('aeu',reuse=True):
        self.new_Y, self.new_Y_modi = self.decoder(self.Z)

    y_pred = sess.run(self.new_Y, feed_dict={self.Z: lc})

This is probably the easiest method. You may be asked to feed placeholder X in this case as well, but you can fill it with an empty array. Normally TensorFlow won't ask for it unless some control dependency ties the two together.
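To illustrate the pattern on a self-contained toy graph (this is a sketch, not the author's model: the layer shapes and the `toy_decoder`/`enc` names are made up, and only the 'aeu' scope and the reuse=True idea are carried over from the answer):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style graph mode

tf.disable_v2_behavior()

def toy_decoder(z):
    # Hypothetical stand-in for the real decoder: one weight matrix.
    w = tf.get_variable('fc2_w', shape=[4, 8])
    return tf.matmul(z, w)

X = tf.placeholder(tf.float32, [None, 8])
Z = tf.placeholder(tf.float32, [None, 4])

with tf.variable_scope('aeu'):
    # Original wiring: the decoder consumes the encoder's output.
    enc = tf.layers.dense(X, 4, name='enc')
    Y_pred = toy_decoder(enc)

with tf.variable_scope('aeu', reuse=True):
    # Second copy of the decoder wired to the Z placeholder; reuse=True
    # makes it share (not duplicate) the trained decoder variables.
    new_Y = toy_decoder(Z)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Only Z needs to be fed: X is not an ancestor of new_Y.
    out = sess.run(new_Y, feed_dict={Z: np.zeros((2, 4), np.float32)})
    print(out.shape)  # (2, 8)
```

Because new_Y depends only on Z, the run call does not require X at all.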

I found out how to split the model.

I will post the answer in case anybody wants to know.

My mistakes were:

1: I did not pass self.Z to the decoder.

2: For the following line:

y_pred = sess.run(Y_pred, feed_dict={Z: lc})

This line is in a different file. After I trained my model, TensorFlow does not know what Z refers to, so you have to use the same variable you identified your tensor with, as follows:

 lfc = tf.get_default_graph().get_tensor_by_name("aeu/Relu:0")

I named it lfc, not Z.

So changing the code to the following solved the issue:

y_pred = sess.run(Y_pred, feed_dict={lfc: lc})
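Putting the fix together on a runnable toy graph (a sketch with made-up layer sizes; only the tensor-name pattern — "Placeholder:0", "aeu/Relu:0", "aeu/Sigmoid:0" — is taken from the question), feeding the fetched lfc tensor itself splits inference into an encoder pass and a decoder pass:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style graph mode

tf.disable_v2_behavior()

# Toy stand-in for the question's graph: shapes are illustrative, but the
# op names follow the same convention as in the question.
X = tf.placeholder(tf.float32, [None, 6])           # "Placeholder:0"
with tf.variable_scope('aeu'):
    lfc = tf.nn.relu(tf.layers.dense(X, 3))         # "aeu/Relu:0"
    Y = tf.nn.sigmoid(tf.layers.dense(lfc, 6))      # "aeu/Sigmoid:0"

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = tf.get_default_graph()
    lfc_t = g.get_tensor_by_name("aeu/Relu:0")
    Y_t = g.get_tensor_by_name("aeu/Sigmoid:0")

    x_sample = np.random.rand(2, 6).astype(np.float32)
    # Encoder half: run the graph only up to the latent tensor.
    lc = sess.run(lfc_t, feed_dict={X: x_sample})
    # Decoder half: feed the latent tensor itself, so X is not required.
    y_pred = sess.run(Y_t, feed_dict={lfc_t: lc})
    print(lc.shape, y_pred.shape)  # (2, 3) (2, 6)
```

Feeding an intermediate tensor via feed_dict overrides its computed value, which is what makes the decoder runnable without X.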
