MedFocusCLIP: improving few shot classification in medical datasets using pixel wise attention


dc.contributor.author Arora, Aadya
dc.contributor.author Namboodiri, Vinay
dc.coverage.spatial India
dc.date.accessioned 2025-05-01T15:06:28Z
dc.date.available 2025-05-01T15:06:28Z
dc.date.issued 2025-04-06
dc.identifier.citation Arora, Aadya and Namboodiri, Vinay, "MedFocusCLIP: improving few shot classification in medical datasets using pixel wise attention", in the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2025), Hyderabad, IN, Apr. 06-11, 2025.
dc.identifier.uri https://doi.org/10.1109/ICASSP49660.2025.10889617
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/11370
dc.description.abstract With the popularity of foundational models, parameter-efficient fine-tuning has become the de facto approach to leverage pretrained models for downstream tasks. Taking inspiration from recent advances in large language models, Visual Prompt Tuning and similar techniques learn an additional prompt to efficiently fine-tune a pretrained vision foundation model. However, we observe that such prompting is insufficient for fine-grained visual classification tasks such as medical image classification, where there is large inter-class variance and small intra-class variance. Hence, in this paper we propose to leverage the advanced segmentation capabilities of the Segment Anything Model 2 (SAM2) [1] as a visual prompting cue to help the visual encoder of CLIP (Contrastive Language-Image Pretraining) [2] by guiding the attention of the CLIP visual encoder to relevant regions in the image. This helps the model focus on highly discriminative regions without getting distracted by visually similar background features, an essential requirement in a few-shot, fine-grained classification setting. We evaluate our method on diverse medical datasets including X-rays, CT scans, and MRI images, and report an accuracy of (71%, 81%, 86%, 58%) from the proposed approach on the (COVID, lung-disease, brain-tumor, breast-cancer) datasets against (66%, 70%, 68%, 29%) from a pretrained CLIP model after few-shot training. The proposed approach also allows us to obtain an interpretable explanation for the classification performance through the localization obtained using segmentation. For demonstrations and visualizations, please visit https://aadya-arora.github.io/MedFocusClip/
dc.description.statementofresponsibility by Aadya Arora and Vinay Namboodiri
dc.language.iso en_US
dc.subject Visual prompting
dc.subject Few shot classification
dc.subject Medical image analysis
dc.subject Vision-language models
dc.title MedFocusCLIP: improving few shot classification in medical datasets using pixel wise attention
dc.type Conference Paper
dc.relation.journal IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2025)
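
The abstract describes using SAM2 segmentation as a pixel-wise cue that steers the CLIP visual encoder toward diagnostically relevant regions. The sketch below is a minimal illustration of that general idea, not the authors' implementation: it applies a binary foreground mask at the input level before CLIP image-text matching, using the openai/CLIP package. The `load_sam2_mask` helper is a hypothetical stand-in for a real SAM2 call, and the image path and class names are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code): input-level masking with a
# SAM2-style foreground mask before CLIP image-text matching.
import numpy as np
import torch
from PIL import Image
import clip  # openai/CLIP


def load_sam2_mask(image: Image.Image) -> np.ndarray:
    """Hypothetical stand-in for a SAM2 segmentation call: returns an all-ones
    HxW mask so the script runs as-is; replace with a real SAM2-predicted
    foreground mask to suppress background pixels."""
    return np.ones((image.height, image.width), dtype=np.uint8)


device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = Image.open("chest_xray.png").convert("RGB")   # example input path
mask = load_sam2_mask(image)                          # HxW array in {0, 1}

# Zero out background pixels so the encoder sees only the segmented region.
masked = Image.fromarray((np.array(image) * mask[..., None]).astype(np.uint8))

class_names = ["normal chest X-ray", "COVID-19 chest X-ray"]  # example labels
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
image_input = preprocess(masked).unsqueeze(0).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image_input)
    text_feat = model.encode_text(text)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(dict(zip(class_names, probs.squeeze(0).tolist())))
```

Note that this input-level masking is a simplification; the paper guides attention inside the CLIP visual encoder rather than masking pixels before encoding.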

