Learning optimally separated class-specific subspace representations using convolutional autoencoder
Format | Journal Article
Language | English
Published | 18.05.2021
Summary: In this work, we propose a novel convolutional-autoencoder-based architecture to generate subspace-specific feature representations that are best suited for the classification task. The class-specific data are assumed to lie in low-dimensional linear subspaces, which may be noisy and not well separated, i.e., the subspace distance (principal angle) between two classes is very small. The proposed network uses a novel class-specific self-expressiveness (CSSE) layer, sandwiched between the encoder and decoder networks, to generate class-wise subspace representations that are well separated. The CSSE layer and the encoder/decoder are trained jointly so that the data still lie in subspaces in the feature space, with a minimum principal angle much higher than that of the input space. To demonstrate the effectiveness of the proposed approach, several experiments were carried out on state-of-the-art machine learning datasets, and a significant improvement in classification performance is observed over existing subspace-based transformation-learning methods.
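The separation measure the abstract relies on, the minimum principal angle between two class subspaces, has a standard SVD-based computation: the singular values of Q1^T Q2, where Q1 and Q2 are orthonormal bases, are the cosines of the principal angles. A minimal sketch of that measure (an illustrative helper, not code from the paper):

```python
import numpy as np

def min_principal_angle(A, B):
    """Smallest principal angle (radians) between the column spans of A and B.

    Hypothetical helper illustrating the subspace-distance measure used to
    assess class separation; the function name and API are assumptions.
    """
    # Orthonormal bases for the two subspaces.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are cosines of the principal angles;
    # the largest singular value corresponds to the smallest angle.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return float(np.arccos(np.clip(s.max(), -1.0, 1.0)))

# Two nearly coincident planes in R^3: minimum principal angle near zero,
# i.e., the "not well separated" regime the paper targets.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = A + 1e-3 * rng.standard_normal((3, 2))
print(min_principal_angle(A, B))  # small angle, close to 0
```

Training the encoder so that this angle grows between every pair of class subspaces is exactly the "well separated in feature space" objective described above.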
DOI | 10.48550/arxiv.2105.08865