Approximation Hardness for A Class of Sparse Optimization Problems
Yichen Chen, Yinyu Ye, Mengdi Wang; 20(38):1−27, 2019.
Abstract
In this paper, we consider three typical optimization problems with a convex loss function and a nonconvex sparse penalty or constraint. For the sparse penalized problem, we prove that finding an $\mathcal{O}(n^{c_1}d^{c_2})$-optimal solution to an $n\times d$ problem is strongly NP-hard for any $c_1, c_2\in [0,1)$ such that $c_1+c_2<1$. For two constrained versions of the sparse optimization problem, we show that it is intractable to approximately compute a solution path associated with increasing values of some tuning parameter. The hardness results apply to a broad class of loss functions and sparse penalties. They suggest that one cannot even approximately solve these three problems in polynomial time, unless P $=$ NP.
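To make the setup concrete, a minimal sketch of the penalized formulation for this problem class (the specific convex loss $\ell$ and nonconvex sparse penalty $p$ are assumptions for illustration; the paper treats them abstractly) is
$$\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{n} \ell\big(a_i^{\top} x,\, b_i\big) \;+\; \lambda \sum_{j=1}^{d} p\big(|x_j|\big),$$
where the data $(a_i, b_i)_{i=1}^{n}$ with $a_i \in \mathbb{R}^d$ define the $n \times d$ problem and $\lambda > 0$ is the regularization parameter; a typical nonconvex penalty would be the $L_q$ penalty $p(t) = t^q$ with $0 < q < 1$.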
© JMLR 2019.