Efficient Output Kernel Learning for Multiple Tasks
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 18.11.2015 |
Subjects | |
Summary: | The paradigm of multi-task learning is that one can achieve better generalization by learning tasks jointly, and thus exploiting the similarity between the tasks, rather than learning them independently of each other. While previously the relationship between tasks had to be user-defined in the form of an output kernel, recent approaches jointly learn the tasks and the output kernel. As the output kernel is a positive semidefinite matrix, the resulting optimization problems are not scalable in the number of tasks, as an eigendecomposition is required in each step. Using the theory of positive semidefinite kernels, we show in this paper that for a certain class of regularizers on the output kernel, the constraint of being positive semidefinite can be dropped, as it is automatically satisfied for the relaxed problem. This leads to an unconstrained dual problem which can be solved efficiently. Experiments on several multi-task and multi-class data sets illustrate the efficacy of our approach in terms of computational efficiency as well as generalization performance. |
---|---|
DOI: | 10.48550/arxiv.1511.05706 |
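
The abstract contrasts earlier output kernel learning methods, which must keep the task-by-task output kernel positive semidefinite at every optimization step, with the proposed unconstrained dual problem. As an illustration only (a NumPy-based sketch, not code from the paper), the snippet below shows the generic per-step PSD projection whose eigendecomposition cost is what the unconstrained formulation avoids:

```python
# Illustrative sketch: projecting a symmetric T x T matrix onto the PSD cone
# requires a full eigendecomposition (O(T^3)), which is the per-step bottleneck
# the abstract refers to when the output kernel must remain positive semidefinite.
import numpy as np

def project_to_psd(M):
    """Project a symmetric matrix onto the cone of positive semidefinite matrices."""
    sym = 0.5 * (M + M.T)                      # symmetrize to guard against round-off
    eigvals, eigvecs = np.linalg.eigh(sym)     # full eigendecomposition: O(T^3)
    eigvals = np.clip(eigvals, 0.0, None)      # clamp negative eigenvalues to zero
    return (eigvecs * eigvals) @ eigvecs.T     # reassemble the nearest PSD matrix

if __name__ == "__main__":
    T = 500                                    # hypothetical number of tasks
    rng = np.random.default_rng(0)
    A = rng.standard_normal((T, T))
    Theta = 0.5 * (A + A.T)                    # a symmetric, generally indefinite update
    Theta_psd = project_to_psd(Theta)
    print("min eigenvalue after projection:",
          np.linalg.eigvalsh(Theta_psd).min())  # >= 0 up to numerical precision
```

The paper's contribution is to show that, for a certain class of regularizers, this projection step can be skipped entirely because the relaxed problem automatically yields a positive semidefinite output kernel.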