Model Selection for Regularized Least-Squares Classification

Bibliographic Details
Published in: Advances in Natural Computation, pp. 565-572
Main Authors: Yang, Hui-Hua; Wang, Xing-Yu; Wang, Yong; Gao, Hai-Hua
Format: Book Chapter / Conference Proceeding
Language: English
Published: Berlin, Heidelberg: Springer Berlin Heidelberg, 2005
Series: Lecture Notes in Computer Science

Summary: Regularized Least-Squares Classification (RLSC) can be regarded as a kind of two-layer neural network that uses a regularized square loss function and the kernel trick. Poggio and Smale recently reformulated it within the mathematical foundations of learning and called it a key algorithm of learning theory. Because the generalization performance of RLSC depends heavily on the settings of its kernel and hyperparameters, we present a novel two-step approach to optimal parameter selection: first the optimal kernel parameters are selected by maximizing the kernel-target alignment, and then the optimal hyperparameter is determined by minimizing RLSC's leave-one-out bound. Unlike a traditional grid search, our method needs no independent validation set. Experiments on IDA's benchmark datasets with a Gaussian kernel demonstrate that the method is feasible and time-efficient.
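As a concrete illustration of the two-step procedure described in the summary, the sketch below first selects a Gaussian kernel width by maximizing kernel-target alignment and then picks the regularization parameter by minimizing a closed-form leave-one-out error for regularized least squares. This is a minimal Python sketch under stated assumptions: the function names and candidate grids are hypothetical, and the paper's exact leave-one-out bound may differ from the closed-form LOO residual used here.

import numpy as np

def gaussian_kernel(X, sigma):
    # RBF Gram matrix from pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_target_alignment(K, y):
    # A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F);
    # for labels y in {-1, +1}^n, ||yy^T||_F = n.
    return float(np.sum(K * np.outer(y, y)) / (np.linalg.norm(K) * len(y)))

def loo_error(K, y, lam):
    # Closed-form leave-one-out prediction for regularized least squares:
    # with G = (K + lam*n*I)^{-1} and c = G y, f_{-i}(x_i) = y_i - c_i / G_ii.
    n = len(y)
    G = np.linalg.inv(K + lam * n * np.eye(n))
    c = G @ y
    loo_pred = y - c / np.diag(G)
    return float(np.mean(np.sign(loo_pred) != y))

def select_parameters(X, y, sigmas, lambdas):
    # Step 1: kernel width by maximum kernel-target alignment (no validation set).
    sigma = max(sigmas, key=lambda s: kernel_target_alignment(gaussian_kernel(X, s), y))
    # Step 2: regularization parameter by minimum leave-one-out error.
    K = gaussian_kernel(X, sigma)
    lam = min(lambdas, key=lambda l: loo_error(K, y, l))
    return sigma, lam

# Hypothetical usage with illustrative grids:
# sigmas = np.logspace(-1, 1, 10); lambdas = np.logspace(-6, 0, 10)
# sigma, lam = select_parameters(X_train, y_train, sigmas, lambdas)

Because the kernel width is fixed in step one, the Gram matrix is rebuilt once per candidate sigma rather than once per (sigma, lambda) pair, which suggests where the time savings over a full grid search can come from.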
ISBN: 3540283234; 9783540283232
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/11539087_72