Ultra Real-Time Portrait Matting via Parallel Semantic Guidance
Published in | ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5 |
---|---|
Format | Conference Proceeding |
Language | English |
Published | IEEE, 04.06.2023 |
Summary: | Most existing portrait matting models either require expensive auxiliary information or try to decompose the task into sub-tasks that are usually resource-hungry. These challenges limit their application on low-power computing devices. In this paper, we propose an ultra-light-weighted portrait matting network via parallel semantic guidance (PSGNet) for real-time portrait matting without any auxiliary inputs. PSGNet leverages parallel multi-level semantic information to efficiently guide the feature representations, replacing the traditional sequential semantic hints from objective decomposition. We also introduce an efficient fusion module that effectively combines the parallel branches of PSGNet to minimize representation redundancy. Comprehensive experiments demonstrate that our PSGNet achieves remarkable performance on both synthetic and real-world images. PSGNet is capable of processing at 100 fps thanks to its ultra-small number of parameters, which makes it deployable on low-power computing devices without compromising real-time portrait matting performance. |
---|---|
ISSN: | 2379-190X |
DOI: | 10.1109/ICASSP49357.2023.10097034 |
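The abstract describes two ideas at a high level: gating feature branches with parallel multi-level semantic maps, and a fusion module that merges those branches. The paper's actual layers are not given here, so the following is only a minimal numpy sketch of that general pattern (hypothetical function names `semantic_guidance` and `fuse`, nearest-neighbour resizing, and equal channel counts per branch are all simplifying assumptions, not the authors' implementation):

```python
import numpy as np

def _nn_resize_idx(target, source):
    # Nearest-neighbour index map from a target length to a source length.
    return np.arange(target) * source // target

def semantic_guidance(features, semantic_maps):
    """Gate each parallel feature branch by its semantic map (sketch only).

    features: list of (C, h, w) arrays at different resolutions.
    semantic_maps: list of (H, W) arrays in [0, 1], one per branch.
    """
    guided = []
    for feat, sem in zip(features, semantic_maps):
        h, w = feat.shape[1:]
        ys = _nn_resize_idx(h, sem.shape[0])
        xs = _nn_resize_idx(w, sem.shape[1])
        sem_r = sem[np.ix_(ys, xs)]            # resize map to feature resolution
        guided.append(feat * sem_r[None, :, :])  # broadcast gate over channels
    return guided

def fuse(guided):
    """Fuse parallel branches: upsample all to the finest grid and average.

    Assumes every branch shares the same channel count C (a simplification;
    a real fusion module would learn a channel-mixing projection instead).
    """
    h, w = guided[0].shape[1:]
    out = np.zeros_like(guided[0])
    for g in guided:
        ys = _nn_resize_idx(h, g.shape[1])
        xs = _nn_resize_idx(w, g.shape[2])
        out += g[:, ys][:, :, xs]
    return out / len(guided)
```

Because the semantic maps act in parallel rather than as sequential sub-task outputs, each branch can be computed independently, which is consistent with the abstract's efficiency argument.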