How to get class activation map in VGG16?

I have created my model graph by using the VGG16 from the keras.applications package and adding a convolutional, average-pooling and dense layer on top of it with sequential modeling. I am not sure how I can access the class activation maps from this composite model. Here is my model definition:

from keras.applications.vgg16 import VGG16
from keras.layers import Input, Convolution2D, AveragePooling2D, Flatten, Dense
from keras.models import Sequential, Model

def VGGCAM(nb_classes):
    # VGG16 convolutional base with ImageNet weights, without the dense head
    input_tensor = Input(shape=(224, 224, 3))
    model_vgg16_conv = VGG16(weights='imagenet', include_top=False, input_tensor=input_tensor)
    model_vgg16_conv.summary()

    # Small head: conv -> average pooling -> flatten -> softmax
    my_model = Sequential()
    inp = model_vgg16_conv.output_shape[1:]
    my_model.add(Convolution2D(77, 7, 7, activation='relu', border_mode="same", input_shape=inp))
    my_model.add(AveragePooling2D((5, 5)))
    my_model.add(Flatten())
    my_model.add(Dense(nb_classes, activation='softmax'))
    my_model.compile(optimizer="sgd", loss='categorical_crossentropy')

    # Stack the head on top of the VGG16 base
    my_model = Model(input=[model_vgg16_conv.input], output=[my_model(model_vgg16_conv.output)])
    my_model.summary()
    return my_model

My final model summary is:


Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
sequential_1 (Sequential)    (None, 10)                1932633   

You can visualize a Class Activation Map (CAM) using Keras, but the network needs a global average pooling layer (rather than a Flatten/Dense head) in order to compute the CAM. Please follow the example below:

https://jacobgil.github.io/deeplearning/class-activation-maps
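For reference, here is a minimal sketch of that idea applied to a VGG16-based model like the one in the question. This is only my own illustration of the approach from the linked post, not code taken from it: the 1024-filter conv layer, the layer names cam_conv and cam_dense, and the use of tensorflow.keras are assumptions, and the new head would still need to be trained before the map means anything.

import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Conv2D, Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

def build_vgg_cam(nb_classes):
    # VGG16 base + conv + global average pooling + softmax (CAM-compatible head)
    base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    x = Conv2D(1024, (3, 3), activation='relu', padding='same', name='cam_conv')(base.output)
    x = GlobalAveragePooling2D()(x)          # the GAP layer that CAM relies on
    out = Dense(nb_classes, activation='softmax', name='cam_dense')(x)
    return Model(base.input, out)

def class_activation_map(model, img_batch, class_idx):
    # Feature maps of the last conv layer for this image
    feat_model = Model(model.input, model.get_layer('cam_conv').output)
    features = feat_model.predict(img_batch)[0]                  # (h, w, channels)
    # Dense-layer weights connecting each feature map to the chosen class
    class_weights = model.get_layer('cam_dense').get_weights()[0][:, class_idx]
    cam = np.dot(features, class_weights)                        # weighted sum -> (h, w)
    cam = np.maximum(cam, 0)
    cam /= (cam.max() + 1e-8)                                    # normalise to [0, 1]
    return cam

The returned map can then be resized to the input resolution and overlaid on the image, as the linked post does.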

Here is a method that uses eager execution. It works nicely in TensorFlow 2, and for me it ran faster and with fewer bugs than with eager execution disabled.

import cv2
import numpy as np
import tensorflow
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

gmodel = VGG16()

# Input spatial size expected by the network (224 x 224 for VGG16)
input_tensor_shape = gmodel.layers[0].input.shape
image_shape = (input_tensor_shape[1], input_tensor_shape[2])

# Find the last convolutional layer by walking the layers backwards
for layer in gmodel.layers[::-1]:
    if isinstance(layer, tensorflow.keras.layers.Conv2D):
        convolution_shape = layer.output.shape[1:]
        convolution_name = layer.name
        break

# Model that returns both the last conv feature maps and the predictions
heatmap_model = Model(
    gmodel.inputs,
    [gmodel.get_layer(convolution_name).output, gmodel.output])

img = cv2.imread(str(gimg_path))  # gimg_path: path to the input image

if img.shape[2] == 1:
    img = np.dstack([img, img, img])
img = cv2.resize(img, image_shape)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_norm = img.astype(np.float32) / 255.0
img_norm = np.expand_dims(img_norm, axis=0)

# Record the forward pass so gradients w.r.t. the conv output can be taken
with tensorflow.GradientTape() as tape:
    inputs = tensorflow.cast(img_norm, tensorflow.float32)
    (conv_output, predictions) = heatmap_model(inputs)
    # Loss = score of the top predicted class (must be computed inside the tape)
    loss = predictions[:, np.argmax(predictions[0])]

grads = tape.gradient(loss, conv_output)

# Guided gradients: keep only positive activations and positive gradients
castConvOutputs = tensorflow.cast(conv_output > 0, "float32")
castGrads = tensorflow.cast(grads > 0, "float32")
guidedGrads = castConvOutputs * castGrads * grads

convOutputs = conv_output[0]
guidedGrads = guidedGrads[0]

# Channel weights = spatial average of the guided gradients (Grad-CAM style)
weights = tensorflow.reduce_mean(guidedGrads, axis=(0, 1))
cam = tensorflow.reduce_sum(tensorflow.multiply(weights, convOutputs), axis=-1)

# Upsample the CAM to the input image size and normalise it to [0, 255]
(w, h) = (img_norm.shape[2], img_norm.shape[1])
heatmap = cv2.resize(cam.numpy(), (w, h))
numer = heatmap - np.min(heatmap)
denom = (heatmap.max() - heatmap.min()) + 1e-100
heatmap0 = ((numer / denom) * 255).astype("uint8")

# Colour the heatmap and blend it with the original image
heatmap1 = cv2.applyColorMap(heatmap0, cv2.COLORMAP_COOL)
output = cv2.addWeighted(img, 0.5, heatmap1, 0.5, 0)
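To actually inspect the result, something like the following (a small addition of mine, not part of the original answer; cam_overlay.png is just an example filename) saves the blended heatmap along with the predicted class:

# Not part of the original answer: report the top class and save the overlay
pred_class = int(np.argmax(predictions[0]))
print("Predicted class index:", pred_class)
cv2.imwrite("cam_overlay.png", output)   # or: cv2.imshow("CAM", output); cv2.waitKey(0)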

Credits: based on code by Adrian Rosebrock.
