Tensorflow BatchNormalization - TypeError: axis must be int or list, type given: <class 'tensorflow.python.framework.ops.Tensor'>
Below is the code I use to train a U-Net. It is mostly plain Keras code with my own loss function and metrics, which are not relevant to the error. To avoid overfitting, I tried adding a BatchNormalization layer after each convolutional layer, but I keep getting a very strange error.
inputs = tf.keras.layers.Input((self.height, self.width, self.channel))
c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(inputs)
c1 = tf.keras.layers.BatchNormalization(c1)
c1 = tf.keras.layers.LeakyReLU(self.alpha)(c1)
c1 = tf.keras.layers.Dropout(self.dropout_rate)(c1)
c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(c1)
c1 = tf.keras.layers.LeakyReLU(self.alpha)(c1)
c1 = tf.keras.layers.Dropout(self.dropout_rate)(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)
....
u9 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = tf.keras.layers.concatenate([u9, c1], axis=3)
c9 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(u9)
c9 = tf.keras.layers.LeakyReLU(self.alpha)(c9)
c9 = tf.keras.layers.Dropout(self.dropout_rate)(c9)
c9 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(c9)
c9 = tf.keras.layers.LeakyReLU(self.alpha)(c9)
c9 = tf.keras.layers.Dropout(self.dropout_rate)(c9)
outputs = tf.keras.layers.Conv2D(self.num_classes, (1, 1), activation='softmax')(c9)
self.model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
self.model.compile(optimizer=tf.keras.optimizers.Adam(lr=self.learning_rate),
loss=cce_iou_coef,
metrics=[iou_coef, dice_coef])
Whenever I try to add a BatchNormalization layer, I get the following error. I can't find the problem. What am I doing wrong?
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-5c6c9c85bbcc> in <module>
----> 1 unet_dev = UNetDev()
2 unet_dev.summary()
~/Desktop/notebook/bachelor-thesis/code/bachelorthesis/unet_dev.py in __init__(self, weight_url, width, height, channel, learning_rate, num_classes, alpha, dropout_rate)
29 inputs = tf.keras.layers.Input((self.height, self.width, self.channel))
30 c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(inputs)
---> 31 c1 = tf.keras.layers.BatchNormalization(c1)
32 c1 = tf.keras.layers.LeakyReLU(self.alpha)(c1)
33 c1 = tf.keras.layers.Dropout(self.dropout_rate)(c1)
~/anaconda3/envs/code/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/normalization.py in __init__(self, axis, momentum, epsilon, center, scale, beta_initializer, gamma_initializer, moving_mean_initializer, moving_variance_initializer, beta_regularizer, gamma_regularizer, beta_constraint, gamma_constraint, renorm, renorm_clipping, renorm_momentum, fused, trainable, virtual_batch_size, adjustment, name, **kwargs)
167 else:
168 raise TypeError('axis must be int or list, type given: %s'
--> 169 % type(axis))
170 self.momentum = momentum
171 self.epsilon = epsilon
TypeError: axis must be int or list, type given: <class 'tensorflow.python.framework.ops.Tensor'>
Simply replace
c1 = tf.keras.layers.BatchNormalization(c1)
with
c1 = tf.keras.layers.BatchNormalization()(c1)
As with every other layer in Keras, you first instantiate the layer and then call it on the input tensor. In your code the tensor c1 is passed to the layer's constructor, where it is interpreted as the first positional argument, axis - hence the TypeError complaining that axis must be an int or list. You don't need to pass the tensor there.
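The instantiate-then-call pattern can be sketched with a minimal standalone model (the shapes and alpha value here are illustrative, not taken from the question):

```python
import tensorflow as tf

# Build a tiny model using the corrected BatchNormalization call.
inputs = tf.keras.layers.Input((32, 32, 3))
x = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(inputs)
# Instantiate the layer with its configuration (none needed here),
# then call the resulting layer object on the tensor:
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(0.1)(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
print(model.output_shape)  # (None, 32, 32, 16)
```

The constructor configures the layer (axis, momentum, epsilon, ...); the call applies it to a tensor. Writing `BatchNormalization(c1)` conflates the two, which is why the tensor ends up where `axis` is expected.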