Output tensors to a Model must be the output of a TensorFlow `Layer` (thus holding past layer metadata) — Keras Model API, TensorFlow
def generator_model(self):
    input_images = Input(shape=[64, 64, 1])

    layer1 = Conv2D(self.filter_size, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(input_images)
    layer1 = LeakyReLU(0.2)(layer1)

    layer2 = Conv2D(self.filter_size * 2, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(layer1)
    layer2 = BatchNormalization()(layer2)
    layer2 = LeakyReLU(0.2)(layer2)

    layer3 = Conv2D(self.filter_size * 4, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(layer2)
    layer3 = BatchNormalization()(layer3)
    layer3 = LeakyReLU(0.2)(layer3)

    layer4 = Conv2D(self.filter_size * 8, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(layer3)
    layer4 = BatchNormalization()(layer4)
    layer4 = LeakyReLU(0.2)(layer4)

    layer5 = Conv2D(self.filter_size * 16, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(layer4)
    layer5 = BatchNormalization()(layer5)
    layer5 = LeakyReLU(0.2)(layer5)

    up_layer5 = Conv2DTranspose(self.filter_size * 8, self.kernel_size, strides=(2, 2), padding='same', use_bias=False)(layer5)
    up_layer5 = BatchNormalization()(up_layer5)
    up_layer5 = LeakyReLU(0.2)(up_layer5)
    # shape = 4*4*512
    up_layer5_concat = tf.concat([up_layer5, layer4], 0)

    up_layer6 = Conv2DTranspose(self.filter_size * 4, self.kernel_size, strides=(2, 2), padding='same', use_bias=False)(up_layer5_concat)
    up_layer6 = BatchNormalization()(up_layer6)
    up_layer6 = LeakyReLU(0.2)(up_layer6)
    up_layer_6_concat = tf.concat([up_layer6, layer3], 0)

    up_layer7 = Conv2DTranspose(self.filter_size * 2, self.kernel_size, strides=(2, 2), padding='same', use_bias=False)(up_layer_6_concat)
    up_layer7 = BatchNormalization()(up_layer7)
    up_layer7 = LeakyReLU(0.2)(up_layer7)
    up_layer_7_concat = tf.concat([up_layer7, layer2], 0)

    up_layer8 = Conv2DTranspose(self.filter_size, self.kernel_size, strides=(2, 2), padding='same', use_bias=False)(up_layer_7_concat)
    up_layer8 = BatchNormalization()(up_layer8)
    up_layer8 = LeakyReLU(0.2)(up_layer8)
    up_layer_8_concat = tf.concat([up_layer8, layer1], 0)

    output = Conv2D(3, self.kernel_size, strides=(1, 1), padding='same', use_bias=False)(up_layer_8_concat)
    final_output = LeakyReLU(0.2)(output)

    model = Model(input_images, output)
    model.summary()
    return model
This is what my generator_model looks like; I followed a research paper to build the architecture, but I keep getting the error above. I have checked the other solutions to this problem here on SO, but none of them worked for me, as they are slightly different cases. My guess is that the problem lies with the tf.concat() calls, which should perhaps be wrapped in a Keras Lambda layer, but I tried that too and it did not help. Any help with this issue? It has been bugging me for 2 days now.
When you define a model using the Keras functional API, you must use Keras layers to build it. Therefore you are right: the problem is in your tf.concat invocations.
In the tf.keras.layers package, however, you can find the Concatenate layer, which uses the functional API too.
Thus, you can change your concat layers from:
up_layer5_concat = tf.concat([up_layer5, layer4], 0)
to:
up_layer5_concat = tf.keras.layers.Concatenate()([up_layer5, layer4])
and so on for every other tf.concat invocation in your network.
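As a minimal, self-contained sketch (with made-up toy shapes, not your 64x64 network), this shows the Concatenate layer joining two feature maps inside a functional Model:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Concatenate
from tensorflow.keras.models import Model

inp = Input(shape=(8, 8, 1))
a = Conv2D(4, 3, padding='same')(inp)   # (None, 8, 8, 4)
b = Conv2D(6, 3, padding='same')(inp)   # (None, 8, 8, 6)

# Concatenate is a Keras Layer, so its output carries the layer
# metadata that Model() requires (unlike a raw tf.concat op).
merged = Concatenate(axis=-1)([a, b])   # (None, 8, 8, 10)
model = Model(inp, merged)

out = model.predict(np.zeros((2, 8, 8, 1)), verbose=0)
print(out.shape)  # (2, 8, 8, 10)
```

Note that Concatenate defaults to axis=-1, the channel axis, which is usually what you want for U-Net-style skip connections; your original tf.concat calls used axis 0, which is the batch axis.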