Unsupervised Domain Adaptation for 3D Point Clouds by Searched Transformations


Bibliographic Details
Published in: IEEE Access, Vol. 10, pp. 56901–56913
Main Authors: Kang, Dongmin; Nam, Yeongwoo; Kyung, Daeun; Choi, Jonghyun
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
Summary: Input-level domain adaptation reduces the burden on a neural encoder, without supervision, by narrowing the domain gap at the input level. It is widely employed in the 2D visual domain, e.g., images and videos, but has not been utilized for 3D point clouds. We propose input-level domain adaptation for 3D point clouds, namely point-level domain adaptation. Specifically, we learn a transformation of 3D point clouds by searching for the best combination of operations on point clouds that transfers data from the source domain to the target domain while maintaining the classification label, without supervision from the target label. We decompose the learning objective into two terms: reducing domain shift and preserving label information. On the PointDA-10 benchmark dataset, our method outperforms state-of-the-art unsupervised point cloud domain adaptation methods by large margins (up to +3.97% on average).
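The core idea in the summary, searching for a combination of point cloud operations that moves source data toward the target domain, can be illustrated with a toy sketch. Everything below is an assumption for illustration: the operations (`jitter`, `scale`, `rotate_z`), the statistics-based domain-gap proxy, and the exhaustive search are stand-ins, not the paper's actual search space, objective, or algorithm.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Candidate point-cloud operations (illustrative, not the paper's search space).
def jitter(pc, sigma=0.01):
    """Add small Gaussian noise to every point."""
    return pc + rng.normal(0.0, sigma, pc.shape)

def scale(pc, factor=0.9):
    """Uniformly scale the cloud about the origin."""
    return pc * factor

def rotate_z(pc, angle=np.pi / 8):
    """Rotate the cloud about the z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pc @ R.T

OPS = {"jitter": jitter, "scale": scale, "rotate_z": rotate_z}

def domain_gap(a, b):
    """Toy proxy for the domain gap: distance between per-axis
    mean/std statistics of two point clouds (not the paper's objective)."""
    stats = lambda pc: np.concatenate([pc.mean(0), pc.std(0)])
    return float(np.linalg.norm(stats(a) - stats(b)))

def search_transform(source, target, max_ops=2):
    """Exhaustively try ordered combinations of up to `max_ops` operations
    and keep the one that moves `source` closest to `target`."""
    best_combo, best_gap = (), domain_gap(source, target)
    for k in range(1, max_ops + 1):
        for combo in itertools.permutations(OPS, k):
            pc = source
            for name in combo:
                pc = OPS[name](pc)
            g = domain_gap(pc, target)
            if g < best_gap:
                best_combo, best_gap = combo, g
    return best_combo, best_gap

# Synthetic example: the "target" domain is a scaled, rotated, shifted
# copy of the source, mimicking a synthetic-to-real gap.
source = rng.normal(0.0, 1.0, (256, 3))
target = rotate_z(scale(source, 0.8)) + 0.02
combo, gap = search_transform(source, target)
print(combo, round(gap, 4))
```

The class label is preserved here only in the trivial sense that the chosen operations (noise, scaling, rotation) are shape-preserving; the paper's method instead learns this trade-off by optimizing a label-preservation term jointly with the domain-shift term, without target labels.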
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3176719