VGG-CAE: Unsupervised Visual Place Recognition Using VGG16-Based Convolutional Autoencoder

Bibliographic Details
Published in: Pattern Recognition and Computer Vision, Vol. 13020, pp. 91-102
Main Authors: Xu, Zhenyu; Zhang, Qieshi; Hao, Fusheng; Ren, Ziliang; Kang, Yuhang; Cheng, Jun
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2021
Series: Lecture Notes in Computer Science

More Information
Summary: Visual Place Recognition (VPR) is a challenging task in Visual Simultaneous Localization and Mapping (VSLAM), which aims to match pairs of images taken at the same place under different conditions. Although most methods based on Convolutional Neural Networks (CNNs) perform well, they require a large number of annotated images for supervised training, which is time- and labor-consuming. To train a CNN in an unsupervised way and achieve better performance, we propose a new place recognition method in this paper. We design a VGG16-based Convolutional Autoencoder (VGG-CAE), which uses the features output by VGG16 as the labels of the images. In this way, VGG-CAE learns a latent representation from these labels and improves robustness against appearance and viewpoint variation. When VGG-CAE is deployed, features are extracted from query and reference images and post-processed, the cosine similarities between the features are computed, and a feature-matching matrix is formed accordingly. To verify the performance of our method, we conducted experiments on several public datasets, showing that our method achieves competitive results compared with existing approaches.
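A minimal sketch (not the authors' released code) of the matching step described in the summary: given feature vectors extracted by an encoder such as VGG-CAE for the query and reference images, the pairwise cosine similarities are computed and arranged into a matching matrix. The NumPy implementation, the array names, and the 512-dimensional descriptor size are illustrative assumptions.

import numpy as np

def cosine_similarity_matrix(query_feats, ref_feats):
    # query_feats: (num_query, dim); ref_feats: (num_ref, dim)
    # Returns a (num_query, num_ref) matrix of cosine similarities.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    return q @ r.T

# Placeholder descriptors; in the paper these would come from the VGG-CAE encoder.
query = np.random.rand(5, 512)        # 5 query images, 512-D (assumed) features
reference = np.random.rand(100, 512)  # 100 reference images
similarity = cosine_similarity_matrix(query, reference)
best_match = similarity.argmax(axis=1)  # best-matching reference index per query

Each row of the matrix scores one query image against all reference images, so the column with the highest similarity gives the candidate place match for that query.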
ISBN: 9783030880064; 3030880060
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-88007-1_8