Efficient piecewise learning for conditional random fields

Bibliographic Details
Published in: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 895 - 901
Main Authors: Alahari, Karteek; Russell, Chris; Torr, Philip H. S.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2010

Summary: Conditional Random Field models have proved effective for several low-level computer vision problems. Inference in these models involves solving a combinatorial optimization problem, using methods such as graph cuts and belief propagation. Although several methods have been proposed to learn the model parameters from training data, they suffer from various drawbacks. Learning these parameters involves computing the partition function, which is intractable. To overcome this, state-of-the-art structured learning methods frame the problem as one of large margin estimation. Iterative solutions have been proposed to solve the resulting convex optimization problem. Each iteration involves solving an inference problem over all the labels, which limits the efficiency of these structured methods. In this paper we present an efficient large margin piecewise learning method which is widely applicable. We show how the resulting optimization problem can be reduced to an equivalent convex problem with a small number of constraints, and solve it using an efficient scheme. Our method is both memory and computationally efficient. We show results on publicly available standard datasets.
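The appeal of piecewise learning described in the summary is that each margin constraint ranges only over the handful of local labelings of one factor ("piece"), rather than over the exponentially many full labelings that a global large-margin formulation must consider. The following is a minimal sketch of that general idea on a toy pairwise CRF, not the authors' exact algorithm: each edge is trained as an independent piece with a subgradient step on its local hinge loss. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: piecewise large-margin learning on a tiny pairwise CRF.
# Each edge (i, j) is an independent "piece"; loss-augmented inference over
# a piece is a cheap enumeration of its n_labels**2 local labelings.

def piecewise_train(edges, unary_feats, labels, n_labels=2,
                    epochs=50, lr=0.1):
    """Learn per-label unary weights and a pairwise compatibility table."""
    d = unary_feats.shape[1]
    w_u = np.zeros((n_labels, d))         # unary weights, one row per label
    w_p = np.zeros((n_labels, n_labels))  # pairwise label compatibilities
    for _ in range(epochs):
        for i, j in edges:
            yi, yj = labels[i], labels[j]
            # Loss-augmented "inference" restricted to this piece.
            best, best_s = (yi, yj), -np.inf
            for a in range(n_labels):
                for b in range(n_labels):
                    s = (w_u[a] @ unary_feats[i]
                         + w_u[b] @ unary_feats[j] + w_p[a, b])
                    if (a, b) != (yi, yj):
                        s += 1.0          # margin / Hamming-style loss
                    if s > best_s:
                        best, best_s = (a, b), s
            a, b = best
            if (a, b) != (yi, yj):        # hinge violated: subgradient step
                w_u[yi] += lr * unary_feats[i]
                w_u[a] -= lr * unary_feats[i]
                w_u[yj] += lr * unary_feats[j]
                w_u[b] -= lr * unary_feats[j]
                w_p[yi, yj] += lr
                w_p[a, b] -= lr
    return w_u, w_p

# Toy 4-node chain with separable one-hot features.
labels = np.array([0, 1, 1, 0])
feats = np.eye(2)[labels]
edges = [(0, 1), (1, 2), (2, 3)]
w_u, w_p = piecewise_train(edges, feats, labels)
pred = np.argmax(feats @ w_u.T, axis=1)   # per-node unary prediction
```

Because every piece is trained independently, the updates need no global inference pass and only constant memory per piece, which mirrors the memory and computational efficiency argument made in the summary.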
ISBN: 1424469848; 9781424469840
ISSN: 1063-6919
DOI: 10.1109/CVPR.2010.5540123