Keras Model Output is float32 instead of uint8… despite data labels being uint8
I am training a model to predict a segmentation in medical images. In the training data, the input data is of type numpy.float64 and the ground-truth labels are of type numpy.uint8. The problem is that, for some reason, my model produces an output of type numpy.float32.
(Image: example of the data types.)
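The same information can be printed directly (a minimal sketch; the array and model names are the ones used in the code below):

# Inspecting the dtypes described above
print(train_X.dtype)        # float64 (input data)
print(train_ground.dtype)   # uint8   (ground-truth labels)
print(segmenter.predict(train_X[:1]).dtype)  # float32 (model output)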
from keras.models import Model
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, BatchNormalization

# Defining the model
segmenter = Model(input_img, segmenter(input_img))

# Training the model (train_ground is of type numpy.uint8)
segmenter_train = segmenter.fit(train_X, train_ground,
                                batch_size=batch_size,
                                epochs=epochs,
                                verbose=1,
                                validation_data=(valid_X, valid_ground))
Model definition:
def segmenter(input_img):
    # encoder
    # input = 28 x 28 x 1 (wide and thin)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)  # 28 x 28 x 32
    conv1 = BatchNormalization()(conv1)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)  # 14 x 14 x 32
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)  # 14 x 14 x 64
    conv2 = BatchNormalization()(conv2)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)  # 7 x 7 x 64
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)  # 7 x 7 x 128 (small and thick)
    conv3 = BatchNormalization()(conv3)
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
    conv3 = BatchNormalization()(conv3)

    # decoder
    conv4 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv3)  # 7 x 7 x 64
    conv4 = BatchNormalization()(conv4)
    conv4 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv4)
    conv4 = BatchNormalization()(conv4)
    up1 = UpSampling2D((2, 2))(conv4)  # 14 x 14 x 64
    conv5 = Conv2D(32, (3, 3), activation='relu', padding='same')(up1)  # 14 x 14 x 32
    conv5 = BatchNormalization()(conv5)
    conv5 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv5)
    conv5 = BatchNormalization()(conv5)
    up2 = UpSampling2D((2, 2))(conv5)  # 28 x 28 x 32
    conv6 = Conv2D(64, (3, 3), activation='relu', padding='same')(up2)  # 28 x 28 x 64
    conv6 = BatchNormalization()(conv6)
    conv6 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv6)
    conv6 = BatchNormalization()(conv6)
    up3 = UpSampling2D((2, 2))(conv6)  # 56 x 56 x 64
    conv7 = Conv2D(64, (3, 3), activation='relu', padding='same')(up3)  # 56 x 56 x 64
    conv7 = BatchNormalization()(conv7)
    conv7 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv7)
    conv7 = BatchNormalization()(conv7)
    up4 = UpSampling2D((2, 2))(conv7)  # 112 x 112 x 64
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(up4)  # 112 x 112 x 1
    return decoded
Thanks in advance for help on this :)
The last layer is a sigmoid activation function. It returns a real number between 0 and 1, not an integer.
Furthermore, it is important that the error metric (the difference between the correct answer and the computed value) is continuous rather than discrete, because a continuous metric is differentiable, which is what allows the network weights to be learned properly with backpropagation.
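For example, a sigmoid output layer pairs naturally with a continuous loss such as binary cross-entropy. The question does not show the compile step, so the optimizer and loss below are assumptions, not the asker's actual settings:

# Assumed compile step (not shown in the question): binary cross-entropy
# is a continuous, differentiable loss that matches a sigmoid output.
segmenter.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])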
For training the network, just convert the truth labels to floating point values.
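In NumPy that is a one-line cast per array (a minimal sketch; train_ground and valid_ground are the label arrays from the question):

# Cast the uint8 masks to float32 before training
train_ground = train_ground.astype('float32')
valid_ground = valid_ground.astype('float32')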
Once you have trained the network and want to use its outputs, just round them to convert them to integers; the sigmoid activation is well suited for that.
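A minimal sketch of that post-processing step (test_X is a hypothetical test batch, introduced here for illustration):

import numpy as np

preds = segmenter.predict(test_X)         # float32 values in (0, 1)
masks = np.round(preds).astype(np.uint8)  # 0/1 masks, back to uint8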