
U-net: how to improve accuracy of multiclass segmentation?

I have been using U-Nets for a while now, and have noticed that in most of my applications they generate an over-estimation surrounding a specific class.

For example, here is a grayscale input image:

[image]

And the manual segmentation of 3 classes (lesion [green], tissue [magenta], background [everything else]):

[image]

The problem I notice in the predictions (over-estimation at the boundaries):

[image]

The typical architecture used looks like this:

from keras.models import Model
from keras.layers import (Activation, Conv2D, Dropout, Input, MaxPooling2D,
                          Reshape, UpSampling2D, concatenate)

def get_unet(dim=128, dropout=0.5, n_classes=3):
    inputs = Input((dim, dim, 1))
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    conv4 = Dropout(dropout)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)

    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    conv5 = Dropout(dropout)(conv5)

    up6 = concatenate([UpSampling2D(size=(2, 2))(conv5), conv4], axis=3)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up6)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)

    up7 = concatenate([UpSampling2D(size=(2, 2))(conv6), conv3], axis=3)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up7)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)

    up8 = concatenate([UpSampling2D(size=(2, 2))(conv7), conv2], axis=3)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up8)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)

    up9 = concatenate([UpSampling2D(size=(2, 2))(conv8), conv1], axis=3)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up9)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)

    conv10 = Conv2D(n_classes, (1, 1), activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv10 = Reshape((dim * dim, n_classes))(conv10)

    output = Activation('softmax')(conv10)

    model = Model(inputs=[inputs], outputs=[output])

    return model
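The head of the model flattens the spatial map to (dim * dim, n_classes) before the softmax so that Keras can apply one weight per pixel under 'temporal' sample weighting. The effect of that Reshape + softmax pair can be illustrated with plain NumPy (array names here are illustrative, not from the original post):

```python
import numpy as np

dim, n_classes = 4, 3
logits = np.random.randn(dim, dim, n_classes)  # per-pixel class scores

# Reshape((dim * dim, n_classes)): one row per pixel
flat = logits.reshape(dim * dim, n_classes)

# Activation('softmax') acts on the last axis, i.e. independently per pixel
exp = np.exp(flat - flat.max(axis=-1, keepdims=True))
probs = exp / exp.sum(axis=-1, keepdims=True)

# each pixel's class probabilities sum to 1
assert np.allclose(probs.sum(axis=-1), 1.0)
```

Because the softmax runs over the last axis only, each of the dim * dim rows is an independent per-pixel class distribution.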

Plus:

from keras import callbacks

# mgpu_model is the compiled (multi-GPU) instance of the model above;
# p, f, json_string and sample_weights are defined elsewhere in the script.
mgpu_model.compile(optimizer='adadelta', loss='categorical_crossentropy',
                   metrics=['accuracy'], sample_weight_mode='temporal')

open(p, 'w').write(json_string)

model_checkpoint = callbacks.ModelCheckpoint(f, save_best_only=True)
reduce_lr_cback = callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.2,
    patience=5, verbose=1,
    min_lr=0.05 * 0.0001)

h = mgpu_model.fit(train_gray, train_masks,
                   batch_size=64, epochs=50,
                   verbose=1, shuffle=True, validation_split=0.2,
                   sample_weight=sample_weights,
                   callbacks=[model_checkpoint, reduce_lr_cback])

My question: do you have any insight or suggestions on how to change either the architecture or the hyperparameters to mitigate the over-estimation? This could even include using a different architecture that may be better at more precise segmentation. (Note that I already do class balancing/weighting to compensate for imbalances in class frequency.)

You can try experimenting with various loss functions instead of cross-entropy; several alternatives exist for multiclass segmentation.
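One commonly suggested alternative is soft Dice loss, which optimizes region overlap directly and is therefore less dominated by large classes than per-pixel cross-entropy. A NumPy sketch of the multiclass version, averaged over classes (a Keras implementation would use backend tensor ops instead; the names are illustrative):

```python
import numpy as np

def soft_dice_loss(y_true, y_pred, eps=1e-7):
    """y_true, y_pred: (n_pixels, n_classes); y_true one-hot, y_pred softmax probs."""
    intersection = (y_true * y_pred).sum(axis=0)          # per-class overlap
    union = y_true.sum(axis=0) + y_pred.sum(axis=0)       # per-class mass
    dice = (2 * intersection + eps) / (union + eps)       # per-class Dice score
    return 1 - dice.mean()                                # average over classes

# a perfect prediction yields a loss of 0
y = np.eye(3)[np.array([0, 1, 2, 0])]
print(round(soft_dice_loss(y, y), 6))  # → 0.0
```

Because each class contributes equally to the mean regardless of its pixel count, a boundary over-estimation on a small class costs proportionally more than under cross-entropy.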

The winner of BraTS 2018 used autoencoder regularization ( https://github.com/IAmSuyogJadhav/3d-mri-brain-tumor-segmentation-using-autoencoder-regularization ). You could try this as well. The idea of that paper is that the model also learns how to encode the features in the latent space better, which somehow helps the model with segmentation.
