
SegNet for CT images pretrained weights

I'm trying to train a SegNet for a segmentation task on CT images (with Keras/TensorFlow). I'm using VGG16 pretrained weights for the encoder, but I had a problem with the first convolutional layer: my images are grayscale, while VGG16 was trained on RGB ones. I solved that using the second method of this (I can't use the first method because it requires too much memory). However, it didn't help; the results are still really bad after training for 100 epochs.
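For context, since the linked method isn't shown here, this is only a guess at what that adaptation looks like: a minimal sketch, assuming the "second method" means summing the pretrained RGB kernel of `block1_conv1` over the color axis so the first layer accepts one channel (mathematically equivalent to feeding the grayscale image replicated three times). The image size here is a placeholder.

```python
from tensorflow.keras.applications import VGG16

height, width = 512, 512  # placeholder CT slice size (assumption)

# VGG16 with ImageNet weights expects 3-channel input
rgb_vgg = VGG16(weights='imagenet', include_top=False,
                input_shape=(height, width, 3))

# Same architecture, but built for 1-channel (grayscale) input
gray_vgg = VGG16(weights=None, include_top=False,
                 input_shape=(height, width, 1))

# Copy every layer's weights; for the first conv layer, sum the
# pretrained kernel over the RGB axis. For an input whose three
# channels are identical, this gives exactly the same activations.
for rgb_layer, gray_layer in zip(rgb_vgg.layers, gray_vgg.layers):
    weights = rgb_layer.get_weights()
    if rgb_layer.name == 'block1_conv1':
        kernel, bias = weights                       # kernel: (3, 3, 3, 64)
        kernel = kernel.sum(axis=2, keepdims=True)   # -> (3, 3, 1, 64)
        weights = [kernel, bias]
    gray_layer.set_weights(weights)
```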

Should I train the first convolutional layer from scratch?

You can try adding a Conv2D layer before the VGG. Something like:

> Input(shape=(height, width, 1))
>
> Conv2D(filters=3, kernel_size=1, padding='same', activation='relu')
>
> The pretrained VGG network (input shape = (height, width, 3))

This is interesting in your case because a 1x1 convolution is typically used to change the channel depth of a feature map; here it learns a mapping from your single grayscale channel to the 3 channels VGG expects.
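A minimal sketch of that idea in Keras (the image size is a placeholder; adjust it to your data):

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Conv2D

height, width = 512, 512  # placeholder CT slice size (assumption)

inputs = Input(shape=(height, width, 1))

# Learned 1x1 convolution: maps the single grayscale channel to the
# 3 channels the pretrained network expects
x = Conv2D(filters=3, kernel_size=1, padding='same',
           activation='relu')(inputs)

# Pretrained VGG16 encoder (3-channel input)
vgg = VGG16(weights='imagenet', include_top=False,
            input_shape=(height, width, 3))
x = vgg(x)

model = Model(inputs, x)
model.summary()
```

This way the grayscale-to-3-channel mapping is trained jointly with the rest of the network instead of being fixed by hand.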
