Unified Visual Perception Model for context-aware wearable AR

Bibliographic Details
Published in: 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1-4
Main Authors: Youngkyoon Jang, Woontack Woo
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2013
Summary: We propose the Unified Visual Perception Model (UVPM), which imitates the human visual perception process, to provide the stable object recognition required for augmented reality (AR) in the field. The model is designed on theoretical foundations from cognitive informatics, brain research, and psychological science. It consists of a Working Memory (WM), which handles low-level processing in a bottom-up manner, and a Long-Term Memory (LTM) and Short-Term Memory (STM), which handle high-level processing in a top-down manner. WM and LTM/STM are mutually complementary, which increases recognition accuracy. By implementing an initial prototype of each module in the model, we verified that the proposed model performs stable object recognition. The model can support context-aware AR with an optical see-through HMD.
DOI: 10.1109/ISMAR.2013.6671818
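
The abstract sketches an architecture, not an implementation. The Python sketch below is a minimal, hypothetical reading of that WM / STM / LTM decomposition: every name here (WorkingMemory, ShortTermMemory, LongTermMemory, bottom_up_scores, prior, recognize) and every scoring rule is an assumption made for illustration; the paper publishes no code, and its actual prototype may work quite differently.

# Hypothetical sketch of the UVPM's WM / STM / LTM interplay; not the
# authors' implementation. All classes, methods, and weights are assumed.
from dataclasses import dataclass, field

class WorkingMemory:
    """Bottom-up stage: scores candidate object labels from raw features."""
    def bottom_up_scores(self, features):
        # Stand-in for real low-level matching (e.g., keypoint descriptors).
        return dict(features)

@dataclass
class ShortTermMemory:
    """Top-down stage: recently recognized objects bias the next frame."""
    recent: list = field(default_factory=list)

    def prior(self, label):
        return 1.5 if label in self.recent else 1.0

@dataclass
class LongTermMemory:
    """Top-down stage: learned scene context says which objects are plausible."""
    context_plausibility: dict = field(default_factory=dict)

    def prior(self, label):
        return self.context_plausibility.get(label, 1.0)

def recognize(wm, stm, ltm, features):
    """Fuse bottom-up evidence with top-down priors, mimicking the
    'mutually complementary' WM and LTM/STM pathways of the abstract."""
    scores = wm.bottom_up_scores(features)
    fused = {lbl: s * stm.prior(lbl) * ltm.prior(lbl) for lbl, s in scores.items()}
    best = max(fused, key=fused.get)
    stm.recent.append(best)  # feed the result back into STM for the next frame
    return best, fused

if __name__ == "__main__":
    wm = WorkingMemory()
    stm = ShortTermMemory(recent=["mug"])
    ltm = LongTermMemory(context_plausibility={"mug": 1.2, "keyboard": 0.8})
    label, fused = recognize(wm, stm, ltm, {"mug": 0.6, "keyboard": 0.7})
    print(label, fused)

In this example run, bottom-up evidence alone slightly favors "keyboard" (0.7 vs. 0.6), but the top-down STM/LTM priors flip the decision to "mug"; this fusion of complementary pathways is the kind of accuracy gain the abstract attributes to combining WM with LTM/STM.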