Tensorflow Batchnormalization - TypeError: axis must be int or list, type given: <class 'tensorflow.python.framework.ops.Tensor'>
Following is my code to train a U-Net. It is mostly standard Keras code with my own loss functions and metrics, which are not relevant to the error. To avoid overfitting I tried to add a BatchNormalization layer after every convolution layer; however, I keep getting a very strange error.
inputs = tf.keras.layers.Input((self.height, self.width, self.channel))
c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(inputs)
c1 = tf.keras.layers.BatchNormalization(c1)
c1 = tf.keras.layers.LeakyReLU(self.alpha)(c1)
c1 = tf.keras.layers.Dropout(self.dropout_rate)(c1)
c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(c1)
c1 = tf.keras.layers.LeakyReLU(self.alpha)(c1)
c1 = tf.keras.layers.Dropout(self.dropout_rate)(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)
....
u9 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = tf.keras.layers.concatenate([u9, c1], axis=3)
c9 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(u9)
c9 = tf.keras.layers.LeakyReLU(self.alpha)(c9)
c9 = tf.keras.layers.Dropout(self.dropout_rate)(c9)
c9 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(c9)
c9 = tf.keras.layers.LeakyReLU(self.alpha)(c9)
c9 = tf.keras.layers.Dropout(self.dropout_rate)(c9)
outputs = tf.keras.layers.Conv2D(self.num_classes, (1, 1), activation='softmax')(c9)
self.model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
self.model.compile(optimizer=tf.keras.optimizers.Adam(lr=self.learning_rate),
                   loss=cce_iou_coef,
                   metrics=[iou_coef, dice_coef])
Whenever I try to add the BatchNormalization layer I get the following error. I cannot find the problem; what am I doing wrong?
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-5c6c9c85bbcc> in <module>
----> 1 unet_dev = UNetDev()
2 unet_dev.summary()
~/Desktop/notebook/bachelor-thesis/code/bachelorthesis/unet_dev.py in __init__(self, weight_url, width, height, channel, learning_rate, num_classes, alpha, dropout_rate)
29 inputs = tf.keras.layers.Input((self.height, self.width, self.channel))
30 c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(inputs)
---> 31 c1 = tf.keras.layers.BatchNormalization(c1)
32 c1 = tf.keras.layers.LeakyReLU(self.alpha)(c1)
33 c1 = tf.keras.layers.Dropout(self.dropout_rate)(c1)
~/anaconda3/envs/code/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/normalization.py in __init__(self, axis, momentum, epsilon, center, scale, beta_initializer, gamma_initializer, moving_mean_initializer, moving_variance_initializer, beta_regularizer, gamma_regularizer, beta_constraint, gamma_constraint, renorm, renorm_clipping, renorm_momentum, fused, trainable, virtual_batch_size, adjustment, name, **kwargs)
167 else:
168 raise TypeError('axis must be int or list, type given: %s'
--> 169 % type(axis))
170 self.momentum = momentum
171 self.epsilon = epsilon
TypeError: axis must be int or list, type given: <class 'tensorflow.python.framework.ops.Tensor'>
Just replace
c1 = tf.keras.layers.BatchNormalization(c1)
by
c1 = tf.keras.layers.BatchNormalization()(c1)
Like other layers in Keras, this is the way to call them: first instantiate the layer, then call the instance on the input tensor. By writing BatchNormalization(c1) you were passing the tensor as a constructor parameter (it was interpreted as the axis argument, as seen in the docs), which you did not need.
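For context, here is the corrected pattern in a minimal, self-contained sketch (the input shape and alpha value are illustrative placeholders, not taken from the question):

import tensorflow as tf

# Build a tiny slice of the model with the fixed call convention.
inputs = tf.keras.layers.Input((128, 128, 3))
c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(inputs)

# Constructor arguments (axis, momentum, epsilon, ...) go in the first
# parentheses; the input tensor goes in the second.
c1 = tf.keras.layers.BatchNormalization(axis=-1)(c1)  # axis=-1 (channels) is the default

c1 = tf.keras.layers.LeakyReLU(0.1)(c1)  # 0.1 is a placeholder alpha
model = tf.keras.Model(inputs=inputs, outputs=c1)
model.summary()

Keras functional-API layers are objects: calling the class returns a layer instance, and calling that instance on a tensor wires it into the graph. That is why both pairs of parentheses are required.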