SqueezeSAM: User friendly mobile interactive segmentation
Format | Journal Article
---|---
Language | English
Published | 11.12.2023
Summary:

The Segment Anything Model (SAM) has been a cornerstone of interactive segmentation, propelling significant progress in generative AI, computational photography, and medical imaging. Despite its ability to process arbitrary user input and generate corresponding segmentation masks, SAM's 600-million-parameter architecture, based on ViT-H, is incompatible with current mobile hardware due to its high computational demands and large model size. Our research aims to adapt SAM for use in mobile photography applications. To this end, we have developed a fully convolutional SqueezeSAM model architecture, which is 62.5 times faster and 31.6 times smaller than the original SAM, making it a viable solution for mobile applications. Furthermore, our tiny model achieves an mIoU within 1% of the original ViT-H architecture.

Automated segmentation holds significant value in the creation flow for photography applications, as evidenced by its adoption by leading industry players such as Apple and CapCut. To facilitate this automation, we employ salient object detection and simulate potential user clicks for foreground object selection, generating an initial segmentation mask that users can subsequently edit interactively. A common user expectation is that a click on a specific part of an object will result in the segmentation of the entire object: a click on a person's t-shirt in a photo should ideally segment the whole person, not just the t-shirt. However, SAM typically segments only the clicked area. We address this limitation through a novel data augmentation scheme. Consequently, if a user clicks on a person holding a basketball, both the person and the basketball are segmented together, aligning with user expectations and enhancing the overall user experience.
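The click-simulation step described above (deriving a foreground click from a saliency map) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold and the centroid-of-the-salient-blob heuristic are assumptions, and `simulate_click` is a hypothetical helper name.

```python
import numpy as np

def simulate_click(saliency: np.ndarray, threshold: float = 0.5):
    """Pick a simulated user click inside the most salient region.

    `saliency` is an HxW map with values in [0, 1]. Binarize it and
    return the centroid (row, col) of the salient pixels; if nothing
    clears the threshold, fall back to the global peak. Both choices
    are illustrative assumptions, not the paper's actual procedure.
    """
    mask = saliency >= threshold
    if not mask.any():
        # No pixel is confidently salient: click the brightest point.
        return np.unravel_index(saliency.argmax(), saliency.shape)
    ys, xs = np.nonzero(mask)
    return int(ys.mean()), int(xs.mean())  # centroid of the salient blob
```

The resulting point would then be fed to the segmentation model as a positive click prompt, producing the initial mask the user refines.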
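The whole-object behavior (person plus basketball segmented together) can be read as training on unions of touching instance masks. A hedged sketch of that idea follows; the function name, the "masks overlap" merge rule, and the fixed-point loop are all assumptions for illustration, not the paper's augmentation scheme.

```python
import numpy as np

def merge_overlapping_masks(masks):
    """Union instance masks that overlap, so a click on any part of a
    group yields the whole group as one training target.

    Repeatedly merges any pair of masks sharing at least one pixel
    until no further merges are possible (handles chains: A touches B,
    B touches C, so A, B, C collapse into one mask).
    """
    merged = [np.asarray(m, dtype=bool) for m in masks]
    changed = True
    while changed:
        changed = False
        out = []
        while merged:
            m = merged.pop()
            for i, other in enumerate(merged):
                if (m & other).any():      # masks touch: fold m into other
                    merged[i] = m | other
                    changed = True
                    break
            else:
                out.append(m)              # m touches nothing remaining
        merged = out
    return merged
```

Each merged mask would then serve as the ground-truth target for clicks landing anywhere inside it, matching the user expectation the abstract describes.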
DOI | 10.48550/arxiv.2312.06736