Visual navigation method for indoor mobile robot based on extended BoW model

Bibliographic Details
Published in: CAAI Transactions on Intelligence Technology, Vol. 2, No. 4, pp. 142-147
Main Authors: Li, Xianghui; Li, Xinde; Khyam, Mohammad Omar; Luo, Chaomin; Tan, Yingzi
Format: Journal Article
Language: English
Published: Beijing: The Institution of Engineering and Technology; John Wiley & Sons, Inc.; Wiley, 01.12.2017

Summary: This article proposes a new navigation method for mobile robots based on an extended bag of words (BoW) model for general object recognition in indoor environments. The scale-invariant feature transform (SIFT) detection algorithm, accelerated on a graphics processing unit (GPU), is used to describe feature vectors in this model. First, to add redundant image information, statistical information about the spatial relationships of all the feature points in an image, i.e. their relative distances and angles, is used to extend the feature vectors of the original BoW model. A support vector machine (SVM) classifier is then used to classify objects. In addition, to support convenient navigation in unknown and dynamic indoor environments, a human–robot interaction method based on a hand-drawn semantic map is introduced. Experimental results show that this navigation method for indoor mobile robots is robust and effective.
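The feature pipeline summarized above (BoW word histogram extended with pairwise distance and angle statistics, then SVM classification) can be sketched as follows. This is not the authors' implementation: it substitutes synthetic descriptors and keypoint locations for GPU-accelerated SIFT output, and all function names, dimensions, and bin counts are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(42)
N_WORDS, N_BINS, DESC_DIM = 10, 8, 16  # illustrative sizes (real SIFT is 128-D)

def spatial_histogram(points, n_bins=N_BINS):
    """Histograms of pairwise relative distances and angles between keypoints."""
    diffs = points[:, None, :] - points[None, :, :]
    iu = np.triu_indices(len(points), k=1)          # each unordered pair once
    dists = np.linalg.norm(diffs, axis=-1)[iu]
    angles = np.arctan2(diffs[:, :, 1], diffs[:, :, 0])[iu]
    d_hist, _ = np.histogram(dists, bins=n_bins, range=(0.0, 1.5))
    a_hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    return np.concatenate([d_hist / max(d_hist.sum(), 1),
                           a_hist / max(a_hist.sum(), 1)])

def extended_bow(descriptors, points, codebook):
    """Standard BoW word histogram, extended with the spatial statistics."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    hist /= hist.sum()
    return np.concatenate([hist, spatial_histogram(points)])

def fake_image(label):
    """Stand-in for SIFT output: descriptors + keypoint locations in [0, 1]^2."""
    n = 40
    desc = rng.normal(label * 2.0, 0.3, size=(n, DESC_DIM))   # class-dependent
    spread = 0.1 if label == 0 else 0.4                       # spatial layout
    pts = np.clip(rng.normal(0.5, spread, size=(n, 2)), 0.0, 1.0)
    return desc, pts

# Build a visual-word codebook, extract extended features, train the SVM.
images = [fake_image(label) for label in (0, 1) * 20]
labels = np.array([0, 1] * 20)
codebook = KMeans(n_clusters=N_WORDS, n_init=5, random_state=0).fit(
    np.vstack([d for d, _ in images]))
X = np.array([extended_bow(d, p, codebook) for d, p in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("feature length:", X.shape[1])   # N_WORDS + 2 * N_BINS = 26
print("train accuracy:", clf.score(X, labels))
```

In this sketch the two synthetic classes differ in both descriptor distribution and keypoint spread, so the appended spatial histograms carry class information of exactly the kind the extension is meant to capture; on real images the descriptors and points would come from the SIFT detector instead.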
ISSN: 2468-2322
DOI: 10.1049/trit.2017.0020