Variational Multi-Prototype Encoder for Object Recognition Using Multiple Prototype Images

Bibliographic Details
Published in: IEEE Access, Vol. 10, pp. 19586-19598
Main Authors: Kang, Jun Seok; Ahn, Sang Chul
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022

Summary: Recent research on the Variational Prototyping-Encoder (VPE) has addressed the problem of classifying 2D flat objects from unseen classes. VPE solves this problem by pre-learning, as a meta-task, the translation of real-world object images into their corresponding prototype images. VPE uses a single prototype for each object class. In general, however, a single prototype is not sufficient to represent a generic object class, because an object's appearance can change significantly with viewpoint and other factors. In this case, training VPE with a single prototype per class can result in overfitting or degraded performance. One solution is to use multiple prototypes, but this normally requires costly sub-labeling to divide each input class into smaller sub-classes and assign a prototype to each. We therefore propose a new learning method, the variational multi-prototype encoder (VaMPE), which overcomes these limitations of VPE by using multiple prototypes for each object class. The proposed method requires no additional sub-labeling beyond simply adding multiple prototypes to each class. Through various experiments, we demonstrate that the proposed method outperforms VPE.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3151856
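
The summary above describes the idea only at a high level. As a purely illustrative aid, the sketch below shows one way a multi-prototype translation loss could be set up in PyTorch: an encoder-decoder reconstructs a prototype image from a real-world input, and the reconstruction error is taken against the closest of the K prototypes registered for that class, so no sub-labeling of which prototype a sample corresponds to is needed. This is not the authors' code; the names (ProtoTranslator, multi_prototype_loss), the tiny architecture, and the min-over-prototypes assignment are assumptions made for illustration only.

    # Hypothetical sketch, not the paper's released implementation.
    import torch
    import torch.nn as nn


    class ProtoTranslator(nn.Module):
        """Tiny VAE-style encoder/decoder over 3x64x64 images (stand-in backbone)."""

        def __init__(self, latent_dim: int = 64):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 64 -> 32
                nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32 -> 16
                nn.Flatten(),
            )
            self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
            self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
            self.dec = nn.Sequential(
                nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16 -> 32
                nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 32 -> 64
            )

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return self.dec(z), mu, logvar


    def multi_prototype_loss(recon, prototypes, mu, logvar, beta=1.0):
        """Reconstruction loss against the closest prototype of each sample's class.

        recon:      (B, 3, 64, 64) decoded images
        prototypes: (B, K, 3, 64, 64) the K prototypes of each sample's class
        """
        # Per-prototype reconstruction error; the min over K removes the need
        # to sub-label which prototype a training sample corresponds to.
        err = ((recon.unsqueeze(1) - prototypes) ** 2).flatten(2).mean(-1)  # (B, K)
        rec = err.min(dim=1).values.mean()
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + beta * kld


    if __name__ == "__main__":
        model = ProtoTranslator()
        x = torch.rand(4, 3, 64, 64)          # batch of real-world object images
        protos = torch.rand(4, 5, 3, 64, 64)  # 5 prototypes per sample's class
        recon, mu, logvar = model(x)
        loss = multi_prototype_loss(recon, protos, mu, logvar)
        loss.backward()
        print(float(loss))

In this sketch the gradient flows only through the best-matching prototype for each sample, which is one plausible way to let the network associate inputs with prototypes implicitly; the actual assignment mechanism used by VaMPE should be taken from the full paper.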