Margins, Shrinkage, and Boosting

Matus Telgarsky
JMLR W&CP 28(2):307–315, 2013

Abstract

This manuscript shows that AdaBoost and its immediate variants can produce approximately maximum margin classifiers simply by scaling their step size choices by a fixed small constant. In this way, when the unscaled step size is an optimal choice, these results provide guarantees for Friedman’s empirically successful “shrinkage” procedure for gradient boosting (Friedman, 2000). Guarantees are also provided for a variety of other step sizes, affirming the intuition that increasingly regularized line searches provide improved margin guarantees. The results hold for the exponential loss and similar losses, most notably the logistic loss.
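As a rough, self-contained illustration (not the paper's code), the Python sketch below runs AdaBoost with decision stumps while multiplying each step size by a fixed shrinkage constant nu, the modification the abstract describes. The stump learner, function names, and parameter defaults are all illustrative assumptions; labels are assumed to lie in {-1, +1}.

```python
import numpy as np

def fit_stump(X, y, w):
    """Exhaustively pick the threshold stump minimizing weighted error."""
    n, d = X.shape
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= thr, 1, -1)
                err = w @ (pred != y)
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost_shrunk(X, y, rounds=100, nu=0.1):
    """AdaBoost whose step sizes are scaled by the fixed constant nu in (0, 1]."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, j, thr, pol = fit_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        # Standard exponential-loss line-search step, shrunk by nu.
        alpha = nu * 0.5 * np.log((1 - err) / err)
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)  # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * p * np.where(X[:, j] <= t, 1, -1) for a, j, t, p in ensemble)
    return np.sign(score)
```

With nu = 1 this reduces to the usual AdaBoost update; per the abstract, choosing a fixed small nu trades extra rounds for an approximately maximum margin solution.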
