Efficient Output Kernel Learning for Multiple Tasks

Bibliographic Details
Published in: arXiv.org
Main Authors: Jawanpuria, Pratik; Lapin, Maksim; Hein, Matthias; Schiele, Bernt
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 18.11.2015
Summary: The paradigm of multi-task learning is that one can achieve better generalization by learning tasks jointly, thereby exploiting the similarity between the tasks, rather than learning them independently of each other. While previously the relationship between tasks had to be user-defined in the form of an output kernel, recent approaches jointly learn the tasks and the output kernel. As the output kernel is a positive semidefinite matrix, the resulting optimization problems are not scalable in the number of tasks, since an eigendecomposition is required in each step. Using the theory of positive semidefinite kernels, we show in this paper that for a certain class of regularizers on the output kernel, the constraint of being positive semidefinite can be dropped, as it is automatically satisfied for the relaxed problem. This leads to an unconstrained dual problem which can be solved efficiently. Experiments on several multi-task and multi-class data sets illustrate the efficacy of our approach in terms of computational efficiency as well as generalization performance.
ISSN: 2331-8422
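
The summary's key claim is that, for suitable regularizers, the unconstrained optimum over the output kernel is automatically positive semidefinite, so the explicit PSD constraint (and the per-step eigendecomposition it entails) can be dropped. The following minimal sketch illustrates this for one such case, assuming a squared-Frobenius regularizer and stand-in dual variables; the objective, the names K, A, Theta, mu, and the closed-form update are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's exact method): with a
# squared-Frobenius regularizer mu * ||Theta||_F^2, the unconstrained minimizer
# of  -<Theta, M> + mu * ||Theta||_F^2  given the task-similarity matrix
# M[t, s] = alpha_t^T K alpha_s is Theta = M / (2 * mu). Because M is a Gram
# matrix, this Theta is positive semidefinite without imposing any constraint.

rng = np.random.default_rng(0)
n, T, mu = 50, 5, 1.0                  # examples, tasks, regularization weight

X = rng.standard_normal((n, 10))
K = X @ X.T                            # input kernel matrix (PSD by construction)
A = rng.standard_normal((n, T))        # stand-in dual variables, one column per task

M = A.T @ K @ A                        # task-similarity matrix, M[t, s] = alpha_t^T K alpha_s
Theta = M / (2.0 * mu)                 # unconstrained closed-form output kernel

eigvals = np.linalg.eigvalsh(Theta)
print("smallest eigenvalue of Theta:", eigvals.min())  # non-negative up to round-off
```

Running the snippet prints a smallest eigenvalue of essentially zero or above, confirming that the relaxed, unconstrained update lands in the PSD cone on its own, which is what makes the dual problem solvable without projection or eigendecomposition.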