Sparse Online Learning via Truncated Gradient
John Langford, Lihong Li, Tong Zhang; 10(28):777–801, 2009.
Abstract
We propose a general method called truncated gradient to induce sparsity in the weights of online-learning algorithms with convex loss functions. This method has several essential properties:
- The degree of sparsity is continuous: a parameter controls the rate of sparsification, from no sparsification to total sparsification.
- The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsification result in only small additional regret with respect to typical online-learning guarantees.
- The approach works well empirically.
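As a rough illustration of the idea, here is a minimal Python sketch of a truncated-gradient-style update for squared loss, not the paper's exact algorithm. The parameter names (eta, gravity, theta, k) and the periodic-truncation schedule are assumptions made for the sketch; the precise formulation and its regret analysis are in the paper.

```python
import numpy as np

def truncate(v, alpha, theta):
    """Shrink toward zero by alpha each coordinate of v whose
    magnitude is at most theta; leave larger coordinates untouched."""
    out = v.copy()
    small = np.abs(v) <= theta
    out[small] = np.sign(v[small]) * np.maximum(np.abs(v[small]) - alpha, 0.0)
    return out

def truncated_gradient_step(w, x, y, eta, gravity, theta, k, step):
    """One online update for squared loss L(w; x, y) = (w.x - y)^2 / 2,
    with truncation applied every k steps to induce sparsity."""
    w = w - eta * (w @ x - y) * x          # standard gradient step
    if step % k == 0:                      # periodic truncation
        w = truncate(w, k * eta * gravity, theta)
    return w

# Toy usage: a sparse target where only the first 3 of 20 features matter.
rng = np.random.default_rng(0)
w = np.zeros(20)
for t in range(1, 1001):
    x = rng.normal(size=20)
    y = x[:3].sum()
    w = truncated_gradient_step(w, x, y, eta=0.05,
                                gravity=0.01, theta=np.inf, k=10, step=t)
print("nonzero weights:", np.count_nonzero(np.abs(w) > 1e-12))
```

With theta set to infinity, the truncation shrinks every small coordinate, which is the regime the abstract likens to L1 regularization; the gravity parameter plays the role of the continuous sparsification knob described above.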