Performance Evaluation of Segment Anything Model with Variational Prompting for Application to Non-Visible Spectrum Imagery

Department of Computer Science, Durham University, UK
PBVS'24@CVPR 2024

We evaluate three prompting strategies, bounding box (bbox), centroid, and random point (randpt), to assess the effectiveness of the Segment Anything Model when applied to X-ray and infrared imagery for segmenting objects of interest. The bbox prompt yields superior segmentation results, while the other two prompting strategies produce notably more incorrect or missed predictions. A sketch of how such prompts can be derived from annotation is shown below.
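As an illustration only, the following sketch shows one plausible way to derive the three prompt types from a ground-truth binary instance mask; the exact prompt-generation procedure used in the paper may differ, and all names here are illustrative.

import numpy as np

def prompts_from_mask(mask, rng=None):
    # mask: HxW boolean ground-truth mask for a single object instance
    rng = np.random.default_rng() if rng is None else rng
    ys, xs = np.nonzero(mask)

    # bbox prompt: tight axis-aligned box around the instance, (x0, y0, x1, y1)
    bbox = np.array([xs.min(), ys.min(), xs.max(), ys.max()])

    # centroid prompt: mean of the foreground pixel coordinates, (x, y)
    centroid = np.array([[xs.mean(), ys.mean()]])

    # random point prompt: a single pixel sampled uniformly from the mask
    idx = rng.integers(len(xs))
    randpt = np.array([[xs[idx], ys[idx]]])

    return bbox, centroid, randpt

Note that for non-convex objects the pixel-mean centroid can fall outside the object itself, which is one intuition for why point prompts can be less reliable than a tight box prompt.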

Abstract

The Segment Anything Model (SAM) is a deep neural network foundational model designed to perform instance segmentation which has gained significant popularity given its zero-shot segmentation ability. SAM operates by generating masks based on various input prompts such as text, bounding boxes, points, or masks, introducing a novel methodology to overcome the constraints posed by dataset-specific training data scarcity. While SAM is trained on an extensive dataset, comprising more than 11M images, it mostly consists of natural photographic (visible band) images with only very limited images from other modalities. Whilst the rapid progress in infrared surveillance and X-ray security screening imaging technologies, driven forward by advances in deep learning, has significantly enhanced the ability to detect, classify and segment objects with high accuracy, it is not evident if the SAM zero-shot capabilities can be transferred to such modalities beyond the visible spectrum. For this reason, this work comprehensively assesses SAM capabilities in segmenting objects of interest in the X-ray and infrared imaging modalities. Our approach reuses and preserves the pre-trained SAM with three different prompts, namely bounding box, centroid and random points. We present several quantitative and qualitative results to showcase the performance of SAM on selected datasets. Our results show that SAM can segment objects in the X-ray modality when given a box prompt, but its performance varies for point prompts. Specifically, SAM performs poorly in segmenting slender objects and organic materials, such as plastic bottles. We additionally find that infrared objects are challenging to segment with point prompts given the low-contrast nature of this modality. Overall, this study shows that while SAM demonstrates outstanding zero-shot capabilities with box prompts, its performance ranges from moderate to poor for point prompts, indicating that special consideration of the cross-modal generalisation of SAM is needed before use on X-ray and infrared imagery.

Proposed Architecture Diagram


Given an input image, the Segment Anything Model (SAM) first generates image embeddings via its image encoder. These embeddings are then interactively queried by the variational prompts (bounding box, centroid, and random points) to generate precise segmentation masks for the objects of interest.
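A minimal sketch of this pipeline using the publicly released segment-anything package is given below; the checkpoint path, input image and prompt coordinates are placeholders, not values from the paper.

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# load a pre-trained SAM checkpoint (variant and path are placeholders)
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# replace with a real X-ray or infrared image loaded as an HxWx3 uint8 RGB array
image = np.zeros((256, 256, 3), dtype=np.uint8)
predictor.set_image(image)  # runs the image encoder once to obtain embeddings

# bbox prompt: (x0, y0, x1, y1) around the object of interest
masks_box, scores_box, _ = predictor.predict(
    box=np.array([50, 40, 220, 180]),
    multimask_output=False)

# point prompt (centroid or random point): (x, y) coordinates with foreground label 1
masks_pt, scores_pt, _ = predictor.predict(
    point_coords=np.array([[135, 110]]),
    point_labels=np.array([1]),
    multimask_output=False)

Because the image embedding is computed once by set_image, the different prompt types can be queried cheaply against the same image, which is what makes this kind of prompt-level comparison practical.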

Segmentation Results


The segmentation results obtained by SAM under the variational prompting strategies are examined across the PIDray, CLCXray, DBF6 and FLIR datasets. The bbox prompt consistently yields the most accurate segmentations, whereas the two point-based prompting strategies struggle in scenarios where objects are overlapping and cluttered, as observed in the X-ray datasets.
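As a sketch of how such per-prompt performance can be quantified, assuming ground-truth instance masks are available for each dataset, the snippet below accumulates plain mask IoU per prompting strategy; the paper may report additional or different measures.

import numpy as np

def mask_iou(pred, gt):
    # intersection-over-union between two boolean masks of equal shape
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 0.0

# accumulate per-instance IoU for each prompting strategy, then average
results = {"bbox": [], "centroid": [], "randpt": []}
# ... inside the evaluation loop, for each annotated instance and strategy:
#     results[strategy].append(mask_iou(predicted_mask, gt_mask))
mean_iou = {k: float(np.mean(v)) for k, v in results.items() if v}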

Poster Presentation


CVPR PBVS poster presentation.

BibTeX

@inproceedings{gaus24segment,
 author = {Gaus, Y.F.A. and Bhowmik, N. and Isaac-Medina, B.K.S. and Breckon, T.P.},
 title = {Performance Evaluation of Segment Anything Model with Variational Prompting for Application to Non-Visible Spectrum Imagery},
 booktitle = {Proc. Computer Vision and Pattern Recognition Workshops},
 year = {2024},
 month = {June},
 publisher = {IEEE},
 keywords = {x-ray, thermal, infrared, foundational model, SAM, segmentation, semantic segmentation, instance segmentation},
 url = {https://breckon.org/toby/publications/papers/gaus24segment.pdf},
 arxiv = {https://arxiv.org/abs/2404.12285},
 note = {to appear},
 category = {baggage},
}