Mask Guided Gated Convolution for Amodal Content Completion
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | 21.07.2024 |
Summary: | We present a model to reconstruct partially visible objects. The
model takes a mask as input, which we call a weighted mask. The mask is used
by gated convolutions to assign more weight to the visible pixels of the
occluded instance than to the background, while ignoring the features of the
invisible pixels. By drawing more attention to the visible region, our model
predicts the invisible patch more effectively than the baseline models,
especially for instances with uniform texture. The model is trained on the
COCOA dataset and two of its subsets in a self-supervised manner. The results
demonstrate that our model generates higher-quality and more texture-rich
outputs than baseline models. Code is available at:
https://github.com/KaziwaSaleh/mask-guided. |
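The gating mechanism described in the summary can be sketched as follows. This is a minimal illustration, not the paper's architecture: the 1x1 "convolution", the tanh/sigmoid branch activations, and the mask encoding (high weight on visible pixels of the occluded instance, lower on background, zero on invisible pixels) are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_guided_gated_conv(x, w_feat, w_gate, weighted_mask):
    """One mask-guided gated convolution step (1x1 kernel for simplicity).

    x             : (H, W, C_in) input feature map
    w_feat, w_gate: (C_in, C_out) weights of the feature and gating branches
    weighted_mask : (H, W) weighted mask -- assumed encoding: high values on
                    visible pixels of the occluded instance, lower values on
                    background, zero on invisible pixels
    """
    feat = np.tanh(x @ w_feat)              # feature branch
    gate = sigmoid(x @ w_gate)              # learned soft gate in (0, 1)
    gate = gate * weighted_mask[..., None]  # mask re-weights the gate, so
                                            # visible pixels contribute more
                                            # and invisible pixels are zeroed
    return feat * gate

# Example: a 4x4 map with one "invisible" pixel masked out.
x = rng.normal(size=(4, 4, 3))
w_feat = rng.normal(size=(3, 2))
w_gate = rng.normal(size=(3, 2))
mask = np.ones((4, 4))
mask[0, 0] = 0.0  # invisible pixel: its features are fully suppressed
out = mask_guided_gated_conv(x, w_feat, w_gate, mask)
```

With this encoding, features at zero-mask positions are gated to exactly zero, while visible-region features pass through the learned gate scaled by their mask weight.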
DOI | 10.48550/arxiv.2407.15203 |