
Sparse Boosting

Peter Bühlmann, Bin Yu; 7(36):1001−1024, 2006.

Abstract

We propose Sparse Boosting (the SparseL2Boost algorithm), a variant of boosting with the squared error loss. SparseL2Boost yields sparser solutions than the previously proposed L2Boosting by minimizing penalized L2-loss functions (FPE-type model selection criteria) through small-step gradient descent. Although boosting may already give relatively sparse solutions, corresponding for example to the soft-thresholding estimator in orthogonal linear models, more sparseness is sometimes desirable to increase prediction accuracy and improve variable selection: such goals can be achieved with SparseL2Boost.
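
To make the procedure concrete, the following is a minimal, illustrative sketch of componentwise L2Boosting with squared error loss and a small step size; it is not the authors' reference implementation, and the function name l2boost, the step size nu, and the fixed iteration count are assumptions for illustration. SparseL2Boost, as described above, would replace the residual-sum-of-squares selection rule below with a penalized model selection criterion such as FPE.

    import numpy as np

    def l2boost(X, y, n_steps=200, nu=0.1):
        # Componentwise L2Boosting (illustrative sketch, not the paper's code).
        # At each step, every column of X is fit to the current residuals by
        # simple least squares; the column giving the largest drop in residual
        # sum of squares is selected, and the fit is updated by a small step nu.
        n, p = X.shape
        beta = np.zeros(p)
        residual = y.astype(float).copy()
        col_norms = (X ** 2).sum(axis=0)
        for _ in range(n_steps):
            # Least-squares coefficient of each predictor against the residuals.
            coefs = X.T @ residual / col_norms
            # Residual sum of squares after each candidate one-column update.
            rss = ((residual[:, None] - X * coefs) ** 2).sum(axis=0)
            j = int(rss.argmin())
            beta[j] += nu * coefs[j]
            residual = residual - nu * coefs[j] * X[:, j]
        return beta

    # Small synthetic example: only two columns carry signal.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    y = 3 * X[:, 0] - 2 * X[:, 5] + rng.standard_normal(100)
    print(np.round(l2boost(X, y), 2))
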

We prove an equivalence of SparseL2Boost to Breiman's nonnegative garrote estimator for orthogonal linear models and demonstrate the generic nature of SparseL2Boost for nonparametric interaction modeling. For automatic selection of the tuning parameter in SparseL2Boost, we propose to employ the gMDL model selection criterion, which can also be used for early stopping of L2Boosting. Consequently, we can select between SparseL2Boost and L2Boosting by comparing their gMDL scores.
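
For the orthogonal linear model case mentioned above, the sketch below contrasts the soft-thresholding estimator (the estimator that L2Boosting corresponds to) with one common parameterization of Breiman's nonnegative garrote (the estimator to which SparseL2Boost is shown to be equivalent). The function names and the penalty parameterization are illustrative assumptions and may differ from the exact form used in the paper.

    import numpy as np

    def soft_threshold(z, lam):
        # Soft-thresholding of a least-squares coefficient z at level lam.
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def nonnegative_garrote(z, lam):
        # Nonnegative garrote shrinkage (one common parameterization):
        # z is scaled by a data-dependent factor and set exactly to zero
        # when z**2 <= lam, giving sparser fits than soft thresholding.
        shrink = np.maximum(1.0 - lam / np.maximum(z ** 2, 1e-12), 0.0)
        return z * shrink

    z = np.linspace(-3.0, 3.0, 7)
    print(soft_threshold(z, 1.0))
    print(nonnegative_garrote(z, 1.0))
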

© JMLR 2006.