
Swin Transformer attention maps visualization

I am using a Swin Transformer for a hierarchical multi-class, multi-label classification problem. I would like to visualize the self-attention maps on my input image by extracting them from the model, but unfortunately I am not succeeding at this task. Could you give me a hint on how to do it? Below is the part of the code in which I am attempting this:

import matplotlib.pyplot as plt

attention_maps = []
for module in model.modules():
    # check whether the module exposes an `attention_patches` attribute
    if hasattr(module, 'attention_patches'):
        print(module.attention_patches.shape)
        if module.attention_patches.numel() == 224 * 224:
            attention_maps.append(module.attention_patches)

for attention_map in attention_maps:
    # detach from the graph and move to CPU so matplotlib can plot it;
    # imshow expects a 2-D array, so reshape to (224, 224) rather than (224, 224, 1)
    attention_map = attention_map.detach().cpu().reshape(224, 224)
    plt.imshow(sample['image'].permute(1, 2, 0), interpolation='nearest')
    plt.imshow(attention_map, alpha=0.7, cmap=plt.cm.Greys)
    plt.show()
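
An alternative I am considering is capturing the weights with forward hooks instead of relying on stored attributes. This is only a minimal sketch, assuming a timm-style Swin implementation in which each WindowAttention module applies an nn.Softmax submodule (module paths like layers.0.blocks.0.attn.softmax); other implementations may name or structure this differently:

import torch

attention_maps = []

def save_attention(module, inputs, output):
    # the softmax output is the attention matrix,
    # shaped (num_windows * batch, num_heads, N, N)
    attention_maps.append(output.detach().cpu())

handles = [m.register_forward_hook(save_attention)
           for name, m in model.named_modules()
           if name.endswith('attn.softmax')]

with torch.no_grad():
    _ = model(sample['image'].unsqueeze(0))

for h in handles:
    h.remove()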

In addition, if you know of any explainability techniques, such as Grad-CAM, that could be used with a hierarchical Swin Transformer, feel free to attach a link; it would be very helpful for me.
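
On that note, the pytorch-grad-cam library (https://github.com/jacobgil/pytorch-grad-cam) supports transformer backbones through a reshape_transform argument. Below is a minimal sketch for a Swin model; the 7x7 token grid and the choice of the last block's norm1 as the target layer are assumptions that depend on the specific architecture and input size:

import matplotlib.pyplot as plt
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

def reshape_transform(tensor, height=7, width=7):
    # Swin blocks emit tokens as (B, H*W, C); CAM needs (B, C, H, W)
    result = tensor.reshape(tensor.size(0), height, width, tensor.size(2))
    return result.permute(0, 3, 1, 2)

# target layer is an assumption: the first norm of the last Swin block
cam = GradCAM(model=model,
              target_layers=[model.layers[-1].blocks[-1].norm1],
              reshape_transform=reshape_transform)

input_tensor = sample['image'].unsqueeze(0)
grayscale_cam = cam(input_tensor=input_tensor)[0, :]

# show_cam_on_image expects an RGB float image scaled to [0, 1]
rgb = sample['image'].permute(1, 2, 0).numpy()
plt.imshow(show_cam_on_image(rgb, grayscale_cam, use_rgb=True))
plt.show()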
