arxiv:2602.20423

MedCLIPSeg: Probabilistic Vision-Language Adaptation for Data-Efficient and Generalizable Medical Image Segmentation

Published on Feb 23 · Submitted by Taha Koleilat on Feb 27

Abstract

AI-generated summary: MedCLIPSeg adapts CLIP for medical image segmentation by leveraging patch-level embeddings and probabilistic attention to achieve data-efficient, uncertainty-aware segmentation with interpretability.

Medical image segmentation remains challenging due to limited annotations for training, ambiguous anatomical features, and domain shifts. While vision-language models such as CLIP offer strong cross-modal representations, their potential for dense, text-guided medical image segmentation remains underexplored. We present MedCLIPSeg, a novel framework that adapts CLIP for robust, data-efficient, and uncertainty-aware medical image segmentation. Our approach leverages patch-level CLIP embeddings through probabilistic cross-modal attention, enabling bidirectional interaction between image and text tokens and explicit modeling of predictive uncertainty. Together with a soft patch-level contrastive loss that encourages more nuanced semantic learning across diverse textual prompts, MedCLIPSeg effectively improves data efficiency and domain generalizability. Extensive experiments across 16 datasets spanning five imaging modalities and six organs demonstrate that MedCLIPSeg outperforms prior methods in accuracy, efficiency, and robustness, while providing interpretable uncertainty maps that highlight local reliability of segmentation results. This work demonstrates the potential of probabilistic vision-language modeling for text-driven medical image segmentation.
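
The soft patch-level contrastive loss mentioned above can be pictured roughly as a soft cross-entropy between patch–prompt similarities and soft targets. The following is a minimal PyTorch sketch under that assumption; the function name, tensor shapes, temperature value, and the use of mask-coverage soft targets are illustrative guesses, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def soft_patch_contrastive_loss(patch_emb: torch.Tensor,
                                prompt_emb: torch.Tensor,
                                soft_targets: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Hypothetical soft patch-level contrastive objective.

    patch_emb:    (B, P, D) patch embeddings from the image encoder
    prompt_emb:   (T, D)    embeddings of T textual prompts
    soft_targets: (B, P, T) soft assignment of each patch to each prompt,
                            e.g. the fraction of the patch covered by the
                            corresponding class mask (an assumption here)
    """
    patch_emb = F.normalize(patch_emb, dim=-1)
    prompt_emb = F.normalize(prompt_emb, dim=-1)
    logits = patch_emb @ prompt_emb.t() / temperature          # (B, P, T)
    log_probs = F.log_softmax(logits, dim=-1)
    targets = soft_targets / soft_targets.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    # Soft cross-entropy: patches spread probability mass over several prompts
    # instead of being forced onto a single hard label.
    return -(targets * log_probs).sum(dim=-1).mean()
```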

Community

Paper author and submitter

MedCLIPSeg introduces a probabilistic adaptation of CLIP for medical image segmentation, addressing key challenges such as limited annotations, ambiguous anatomical boundaries, and domain shift across imaging devices and institutions. The method proposes a Probabilistic Vision–Language (PVL) Adapter that enables bidirectional interaction between visual patch tokens and textual prompts while modeling uncertainty in attention through probabilistic keys and values. This design allows the model to down-weight uncertain features and produce calibrated predictions alongside uncertainty maps.
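
As a rough mental model of the PVL Adapter's probabilistic attention, here is a minimal PyTorch sketch. It assumes keys and values are parameterized by mean and log-variance heads, that uncertain keys are down-weighted via a simple variance penalty on the attention logits, and that an attention-weighted value variance serves as a per-token uncertainty signal; the class name, the penalty form, and the uncertainty read-out are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class PVLAttentionSketch(nn.Module):
    """Hypothetical probabilistic cross-modal attention block."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.q_proj = nn.Linear(dim, dim)
        # Probabilistic keys/values: one head for the mean, one for log-variance.
        self.k_mu = nn.Linear(dim, dim)
        self.k_logvar = nn.Linear(dim, dim)
        self.v_mu = nn.Linear(dim, dim)
        self.v_logvar = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def _split(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        return x.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)

    def forward(self, queries: torch.Tensor, context: torch.Tensor):
        # queries: (B, Nq, D) e.g. image patch tokens; context: (B, Nc, D)
        # e.g. text tokens (swap the two for the other attention direction).
        q = self._split(self.q_proj(queries))
        k_mu = self._split(self.k_mu(context))
        k_var = self._split(self.k_logvar(context)).exp()
        v_mu = self._split(self.v_mu(context))
        v_var = self._split(self.v_logvar(context)).exp()

        # Scaled dot-product logits on the key means ...
        logits = q @ k_mu.transpose(-2, -1) / self.head_dim ** 0.5
        # ... reduced for keys with high variance, so uncertain features
        # contribute less to the mixture (a simple illustrative penalty).
        logits = logits - 0.5 * k_var.mean(dim=-1, keepdim=True).transpose(-2, -1)
        attn = logits.softmax(dim=-1)

        out = self.out_proj((attn @ v_mu).transpose(1, 2).flatten(2))
        # Attention-weighted value variance as a crude per-query uncertainty,
        # which could be reshaped back onto the patch grid as an uncertainty map.
        uncertainty = (attn @ v_var).mean(dim=(1, 3))
        return out, uncertainty
```

Applied once with image patches as queries and text tokens as context, and once in the reverse direction, such a block would give the bidirectional image–text interaction described above, with the returned uncertainty usable for per-patch reliability maps.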

Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 0
