Latent sparse subspace learning and visual domain classification via balanced distribution alignment and Hilbert–Schmidt metric

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), Vol. 28, No. 1
Main Authors: Noori Saray, Shiva; Balafar, Mohammad-Ali; Tahmoresnezhad, Jafar
Format: Journal Article
Language: English
Published: London: Springer London (Springer Nature B.V.), 01.03.2025
Summary: Training machine learning (ML) models to make accurate predictions requires a sufficient number of labeled samples. However, because labeled samples are scarce or entirely absent in most domains, it is often beneficial to use domain adaptation (DA) and transfer learning (TL) to leverage a related auxiliary source domain and improve performance on the target domain. The purpose of TL and DA is to use labeled sample information (i.e., samples and their corresponding labels) to train a classifier that categorizes the unlabeled samples. This paper proposes a novel semi-supervised transfer learning method entitled "Latent Sparse subspace learning and visual domain classification via Balanced distribution alignment and Hilbert–Schmidt metric (LSBH)". LSBH uses latent sparse domain transfer learning for visual adaptation (LSDT) to adapt samples with different distributions or feature spaces across domains, and it prevents the formation of a local common subspace for the source and target domains by simultaneously learning the latent space and the sparse reconstruction. LSBH introduces a robust classifier that maintains performance and accuracy even under variations between the source and target domains. To this end, its optimization problem employs two criteria: the maximum mean discrepancy (MMD), to reduce the marginal and conditional distribution disparities between the domains, and the Hilbert–Schmidt independence criterion (HSIC), to increase the dependency between samples and labels in the classification step. LSBH obtains the optimal classifier coefficients by solving this optimization problem, so minimizing the error of the loss function is itself part of the optimization. In addition, a neighborhood graph over the samples is used to preserve the geometric structure of the data during classification.
The efficiency of the proposed method has been evaluated on several visual datasets and compared with recent and prominent domain adaptation and transfer learning methods. The results indicate that LSBH outperforms the other state-of-the-art methods in label prediction.
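The two alignment criteria named in the summary, MMD and HSIC, have standard kernel-based empirical estimators. The sketch below is an illustration of those general estimators only, not the authors' LSBH implementation; the function names, the RBF kernel choice, and the `gamma` parameter are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances, then Gaussian (RBF) kernel.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    """Squared maximum mean discrepancy between source and target samples.

    Small values indicate that the two sample sets have similar
    distributions in the kernel-induced feature space.
    """
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

def hsic(X, Y, gamma=1.0):
    """Empirical Hilbert-Schmidt independence criterion between X and Y.

    Larger values indicate stronger statistical dependence, e.g. between
    samples and their labels.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    L = rbf_kernel(Y, Y, gamma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In a method like the one summarized above, an MMD-style term is minimized to align the domain distributions, while an HSIC-style term is maximized to keep the learned representation dependent on the labels.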
Bibliography: ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ISSN: 1433-7541, 1433-755X
DOI: 10.1007/s10044-024-01390-w