Camera localization with Siamese neural networks using iterative relative pose estimation


Bibliographic Details
Published in: Journal of Computational Design and Engineering, Vol. 9, No. 4, pp. 1482-1497
Main Authors: Kim, Daewoon; Ko, Kwanghee
Format: Journal Article
Language: English
Published: Oxford University Press, 01.08.2022
한국CDE학회 (Society for Computational Design and Engineering)

Summary: This paper presents a novel deep learning-based camera localization method that uses iterative relative pose estimation to improve the accuracy of pose estimation from a single RGB image. Although most existing deep learning-based camera localization methods are robust to textureless scenes, illumination changes, and occlusions, they are less accurate than non-deep-learning-based methods. The proposed method improves localization accuracy by using the relative poses between the input image and the training dataset images. It simultaneously trains the network for the absolute poses of the input images and for their relative poses using Siamese networks. In the inference stage, it estimates the absolute pose of a query image and then iteratively updates that pose using the relative pose information. Real-world experiments with widely used camera localization datasets and the authors' own dataset validate the performance of the proposed method, which exhibits higher localization accuracy than state-of-the-art deep learning-based camera localization methods. Finally, an application of the proposed method to augmented reality is presented. Graphical Abstract
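The inference procedure described in the summary (estimate an absolute pose, then iteratively refine it with relative poses to training images) can be sketched in simplified form. This is a hypothetical illustration, not the paper's implementation: it restricts poses to camera positions (translations only), and `relative_fn` is a stand-in for the Siamese branch's predicted relative displacement, which in the actual method comes from an image pair.

```python
import numpy as np

def refine_position(initial, db_positions, relative_fn, k=2, n_iters=5):
    """Toy sketch of iterative pose refinement (translation only).

    initial      -- the network's absolute position estimate for the query
    db_positions -- known camera positions of the training images
    relative_fn  -- hypothetical stand-in: relative_fn(i) returns the
                    predicted displacement from training image i to the query
    """
    est = np.asarray(initial, dtype=float)
    db = np.asarray(db_positions, dtype=float)
    for _ in range(n_iters):
        # Pick the k training images whose poses are closest to the current estimate.
        idx = np.argsort(np.linalg.norm(db - est, axis=1))[:k]
        # Re-express the query position through each neighbour and average.
        est = np.mean([db[i] + relative_fn(i) for i in idx], axis=0)
    return est
```

With exact relative displacements, each neighbour maps to the true query position and the estimate converges in one iteration; with noisy network predictions, the averaging over k neighbours and the re-selection of neighbours at each iteration are what drive the refinement.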
ISSN: 2288-5048, 2288-4300
DOI: 10.1093/jcde/qwac066