Distributed optimization for multi-task learning via nuclear-norm approximation
The authors are with the Department of Mechanical and Aerospace Engineering, University of California, San Diego, USA

Bibliographic Details
Published in IFAC-PapersOnLine, Vol. 48; no. 22; pp. 64-69
Main Authors Mateos-Núñez, David; Cortés, Jorge
Format Journal Article
Language English
Published Elsevier Ltd 2015

Summary: We exploit a variational characterization of the nuclear norm to extend the framework of distributed convex optimization to machine learning problems that focus on the sparsity of the aggregate solution. We propose two distributed dynamics that can be used for multi-task feature learning and recommender systems in scenarios with more tasks or users than features. Our first dynamics tackles a convex minimization on local decision variables subject to agreement on a set of local auxiliary matrices. Our second dynamics employs a saddle-point reformulation through Fenchel conjugation of quadratic forms, avoiding the computation of the inverse of the local matrices. We show the correctness of both coordination algorithms using a general analytical framework developed in our previous work that combines distributed optimization and subgradient methods for saddle-point problems.
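The variational characterization of the nuclear norm that the summary refers to is commonly the factored form ||X||_* = (1/2) min over {U, V : X = U V^T} of (||U||_F^2 + ||V||_F^2). A minimal numerical sketch of this identity (not code from the paper; the matrix and factor construction below are illustrative assumptions) checks that a factorization built from the SVD attains the nuclear norm:

```python
import numpy as np

# Illustrative check of the variational characterization
#   ||X||_* = (1/2) min_{X = U V^T} (||U||_F^2 + ||V||_F^2)
# using an arbitrary random matrix X.

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))

# Nuclear norm directly: sum of singular values of X.
P, s, Qt = np.linalg.svd(X, full_matrices=False)
nuclear = s.sum()

# An optimal factorization: U = P * sqrt(S), V = Q * sqrt(S),
# so that X = U V^T and the factored objective attains ||X||_*.
U = P * np.sqrt(s)      # broadcasting scales column j of P by sqrt(s[j])
V = Qt.T * np.sqrt(s)   # likewise for the right singular vectors
variational = 0.5 * (np.linalg.norm(U, "fro")**2 + np.linalg.norm(V, "fro")**2)

assert np.allclose(U @ V.T, X)
assert np.isclose(nuclear, variational)
```

This factored form is what makes a distributed treatment natural: the Frobenius-norm terms decompose across agents' local variables, whereas the nuclear norm of the aggregate matrix does not.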
ISSN: 2405-8963
DOI: 10.1016/j.ifacol.2015.10.308