OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling
Format | Journal Article |
---|---|
Language | English |
Published | 10.10.2024 |
Summary: | Constrained by the separate encoding of vision and language, existing grounding and referring segmentation works rely heavily on bulky Transformer-based fusion en-/decoders and a variety of early-stage interaction technologies. At the same time, the current mask visual language modeling (MVLM) paradigm fails to capture the nuanced referential relationship between image and text in referring tasks. In this paper, we propose OneRef, a minimalist referring framework built on a modality-shared one-tower transformer that unifies the visual and linguistic feature spaces. To model the referential relationship, we introduce a novel MVLM paradigm called Mask Referring Modeling (MRefM), which encompasses both referring-aware mask image modeling and referring-aware mask language modeling. Both modules reconstruct not only modality-related content but also cross-modal referring content. Within MRefM, we propose a referring-aware dynamic image masking strategy that adapts to the referred region rather than relying on fixed ratios or generic random masking schemes. By leveraging the unified visual-language feature space and incorporating MRefM's ability to model referential relations, our approach enables direct regression of the referring results without resorting to various complex techniques. Our method consistently surpasses existing approaches and achieves SoTA performance on both grounding and segmentation tasks, providing valuable insights for future research. Our code and models are available at https://github.com/linhuixiao/OneRef. |
---|---|
DOI: | 10.48550/arxiv.2410.08021 |
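
The referring-aware dynamic image masking described in the abstract can be illustrated with a minimal sketch: patches inside the referred region are masked with a higher probability than background patches, so the chosen ratio varies with the referred object rather than being fixed or random. The function name `referring_aware_mask`, the box format, and the two ratio values below are hypothetical illustrations of this general idea, not the OneRef implementation.

```python
import torch


def referring_aware_mask(
    num_patches_h: int,
    num_patches_w: int,
    ref_box: tuple,            # (x0, y0, x1, y1) in patch coordinates; assumed input format
    base_ratio: float = 0.4,   # assumed masking ratio outside the referred region
    ref_ratio: float = 0.75,   # assumed higher masking ratio inside the referred region
) -> torch.Tensor:
    """Return a boolean patch mask (True = masked).

    Illustrative sketch only: patches covered by the referred region are
    masked more aggressively, forcing reconstruction to rely on the text
    and on cross-modal context.
    """
    # Build a grid of patch coordinates.
    ys = torch.arange(num_patches_h).view(-1, 1).expand(num_patches_h, num_patches_w)
    xs = torch.arange(num_patches_w).view(1, -1).expand(num_patches_h, num_patches_w)

    # Patches that fall inside the referred box.
    x0, y0, x1, y1 = ref_box
    in_ref = (xs >= x0) & (xs < x1) & (ys >= y0) & (ys < y1)

    # Per-patch masking probability, driven by the referred region.
    prob = torch.where(in_ref, torch.tensor(ref_ratio), torch.tensor(base_ratio))
    return torch.rand(num_patches_h, num_patches_w) < prob


# Example: a 14x14 patch grid with the referred object covering patches [3:9, 4:10].
mask = referring_aware_mask(14, 14, ref_box=(4, 3, 10, 9))
print(mask.float().mean())  # overall ratio depends on the size of the referred region
```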