Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives
Antoine Dedieu, Hussein Hazimeh, Rahul Mazumder; 22(135):1−47, 2021.
Abstract
We consider a discrete optimization formulation for learning sparse classifiers, where the outcome depends upon a linear combination of a small subset of features. Recent work has shown that mixed integer programming (MIP) can be used to solve (to optimality) ℓ0-regularized regression problems at scales much larger than what was conventionally considered possible. Despite their usefulness, MIP-based global optimization approaches are significantly slower than the relatively mature algorithms for ℓ1-regularization and heuristics for nonconvex regularized problems. We aim to bridge this gap in computation times by developing new MIP-based algorithms for ℓ0-regularized classification. We propose two classes of scalable algorithms: an exact algorithm that can handle p ≈ 50,000 features in a few minutes, and approximate algorithms that can address instances with p ≈ 10^6 in times comparable to the fast ℓ1-based algorithms. Our exact algorithm is based on the novel idea of "integrality generation", which solves the original problem (with p binary variables) via a sequence of mixed integer programs that involve a small number of binary variables. Our approximate algorithms are based on coordinate descent and local combinatorial search. In addition, we present new estimation error bounds for a class of ℓ0-regularized estimators. Experiments on real and synthetic data demonstrate that our approach leads to models with considerably improved statistical performance (especially variable selection) compared to competing methods.
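For intuition only, the sketch below shows one generic way to combine coordinate descent with hard thresholding for an ℓ0-regularized (plus ridge) logistic classification objective. It is not the paper's implementation; the function name, penalty parameters, and step sizes are illustrative assumptions.

```python
import numpy as np

def l0_logistic_cd(X, y, lam0=0.01, lam2=1e-4, n_iters=100):
    """Illustrative cyclic coordinate descent with hard thresholding for
    (1/n) * sum_i log(1 + exp(-y_i * x_i^T beta)) + lam0 * ||beta||_0 + lam2 * ||beta||_2^2,
    assuming labels y_i in {-1, +1}. A sketch, not the authors' algorithm."""
    n, p = X.shape
    beta = np.zeros(p)
    Xbeta = X @ beta
    # Per-coordinate Lipschitz constants of the smooth part (logistic loss + ridge).
    L = (X ** 2).sum(axis=0) / (4.0 * n) + 2.0 * lam2
    for _ in range(n_iters):
        for j in range(p):
            # Gradient of the smooth part with respect to beta_j.
            probs = 1.0 / (1.0 + np.exp(y * Xbeta))        # sigma(-y_i * x_i^T beta)
            grad_j = -(y * probs) @ X[:, j] / n + 2.0 * lam2 * beta[j]
            z = beta[j] - grad_j / L[j]
            # Proximal (hard-thresholding) step for the l0 penalty:
            # keep z only if the quadratic gain exceeds the l0 cost.
            new_bj = z if 0.5 * L[j] * z ** 2 > lam0 else 0.0
            if new_bj != beta[j]:
                Xbeta += (new_bj - beta[j]) * X[:, j]
                beta[j] = new_bj
    return beta
```

In this style of method, a local combinatorial search phase would typically be layered on top of the coordinate-descent solution (e.g., trying small swaps of coordinates in and out of the support) to escape weak local minima.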
© JMLR 2021.