IPE Transformer for Depth Completion with Input-Aware Positional Embeddings
Published in: Pattern Recognition and Computer Vision, Vol. 13022, pp. 263–275
Main Authors:
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2021
Series: Lecture Notes in Computer Science
Summary: In contrast to traditional transformer blocks, which use a set of pre-defined parameters as positional embeddings, we propose the input-aware positional embedding (IPE), which is dynamically generated according to the input feature. We implement this idea by designing the IPE transformer, which enjoys stronger generalization power across arbitrary input sizes. To verify its effectiveness, we integrate the newly designed transformer into NLSPN and GuideNet, two remarkable depth completion networks. Experimental results on a large-scale outdoor depth completion dataset show that the proposed transformer can effectively model long-range dependency with a manageable memory overhead.
ISBN: 3030880125, 9783030880125
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-030-88013-2_22
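The summary describes the core idea: instead of looking up positional embeddings from a fixed, pre-defined table, the IPE is computed from the input features themselves, so the same weights apply to inputs of any size. The chapter itself is not open here, so the following is only a minimal numpy sketch of that idea under assumed details: a hypothetical learned projection (`w`, `b`) generates the embedding, and a single-head self-attention consumes features with the IPE added. It is an illustration of the mechanism, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def input_aware_pos_embedding(x, w, b):
    """Generate positional embeddings from the input features themselves.

    x : (n_tokens, d) input feature tokens
    w : (d, d) projection weights, b : (d,) bias -- hypothetical learned params
    Unlike a fixed lookup table of shape (max_len, d), this varies with x and
    imposes no maximum input size.
    """
    return np.tanh(x @ w + b)

def attention_with_ipe(x, w_pe, b_pe):
    """Single-head self-attention with the input-aware embedding added first."""
    pe = input_aware_pos_embedding(x, w_pe, b_pe)
    h = x + pe                                   # inject position information
    scores = h @ h.T / np.sqrt(h.shape[1])       # scaled dot-product scores
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return attn @ h

# The same weights handle arbitrary token counts (i.e. arbitrary input sizes):
w = rng.normal(size=(8, 8)) * 0.1
b = np.zeros(8)
for n in (16, 64):
    x = rng.normal(size=(n, 8))
    out = attention_with_ipe(x, w, b)
    print(out.shape)
```

Because the embedding is a function of the tokens rather than a parameter indexed by position, a fixed-size weight matrix suffices for any spatial resolution, which is the generalization property the summary claims.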