Personalized Dictionary Learning for Heterogeneous Datasets
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 24.05.2023 |
| Subjects | |
Summary: We introduce a relevant yet challenging problem named Personalized Dictionary Learning (PerDL), where the goal is to learn sparse linear representations from heterogeneous datasets that share some commonality. In PerDL, we model each dataset's shared and unique features as global and local dictionaries. Challenges for PerDL are not only inherited from classical dictionary learning (DL) but also arise from the unknown nature of the shared and unique features. In this paper, we rigorously formulate this problem and provide conditions under which the global and local dictionaries can be provably disentangled. Under these conditions, we provide a meta-algorithm called Personalized Matching and Averaging (PerMA) that can recover both global and local dictionaries from heterogeneous datasets. PerMA is highly efficient; it converges to the ground truth at a linear rate under suitable conditions. Moreover, it automatically borrows strength from strong learners to improve the prediction of weak learners. As a general framework for extracting global and local dictionaries, we show the application of PerDL in different learning tasks, such as training with imbalanced datasets and video surveillance.
DOI: 10.48550/arxiv.2305.15311
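
As a rough illustration of the model described in the summary, the NumPy sketch below generates heterogeneous datasets that share a global dictionary while each client keeps its own local atoms, then recovers the shared atoms with a simple matching-and-averaging step. The variable names (`n_global`, `n_local`), the synthetic data, and the greedy correlation-based matching are illustrative assumptions only; this is not the paper's exact formulation or the actual PerMA algorithm.

```python
# Minimal sketch of the PerDL-style data model plus a toy matching-and-averaging
# step. Everything here is a simplified assumption, not the paper's PerMA method.
import numpy as np

rng = np.random.default_rng(0)

def synth_client(D_global, n_local, n_samples, sparsity, dim):
    """Generate one client's data: Y = [D_global, D_local] @ X with sparse X."""
    D_local = rng.standard_normal((dim, n_local))
    D_local /= np.linalg.norm(D_local, axis=0)          # unit-norm local atoms
    D = np.hstack([D_global, D_local])                  # client dictionary
    X = np.zeros((D.shape[1], n_samples))
    for j in range(n_samples):
        support = rng.choice(D.shape[1], size=sparsity, replace=False)
        X[support, j] = rng.standard_normal(sparsity)   # sparse codes
    return D, D_local, D @ X

dim, n_global, n_local = 20, 5, 3
D_global = rng.standard_normal((dim, n_global))
D_global /= np.linalg.norm(D_global, axis=0)

# Each heterogeneous client shares D_global but has its own local atoms.
clients = [synth_client(D_global, n_local, n_samples=200, sparsity=3, dim=dim)
           for _ in range(4)]

# "Matching": align each client's dictionary to a reference by maximal absolute
# correlation, then "average" the matched (shared) atoms. For simplicity the
# true client dictionaries stand in for the output of a local DL solver.
reference = clients[0][0][:, :n_global]
matched = []
for D_hat, _, _ in clients:
    corr = np.abs(reference.T @ D_hat)                  # (n_global, n_atoms)
    idx = corr.argmax(axis=1)                           # greedy column match
    signs = np.sign(np.sum(reference * D_hat[:, idx], axis=0))
    matched.append(D_hat[:, idx] * signs)               # fix sign ambiguity
D_global_est = np.mean(matched, axis=0)
D_global_est /= np.linalg.norm(D_global_est, axis=0)

print("max atom error:",
      np.max(np.linalg.norm(D_global_est - D_global, axis=0)))
```

In this toy setup the error is essentially zero because the true dictionaries are reused; in practice each client would first run a dictionary learning routine on its own data, and the matching-and-averaging step would then combine the noisy per-client estimates of the shared atoms.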