Sparse Kernel Regression with Coefficient-based ℓq-regularization
Lei Shi, Xiaolin Huang, Yunlong Feng, Johan A.K. Suykens; 20(161):1−44, 2019.
Abstract
In this paper, we consider ℓq-regularized kernel regression with 0<q≤1. The algorithm minimizes a least-squares loss functional plus a coefficient-based ℓq-penalty term over the linear span of features generated by a kernel function. We study the asymptotic behavior of the algorithm within the framework of learning theory. The contribution of this paper is two-fold. First, we derive a tight bound on the ℓ2-empirical covering numbers of the function space involved in the error analysis. Based on this result, we obtain convergence rates for ℓ1-regularized kernel regression that are the best so far. Second, for the case 0<q<1, we show that the regularization parameter governs a trade-off between sparsity and convergence rates. Under some mild conditions, the fraction of non-zero coefficients in a local minimizer of the algorithm tends to 0 at a polynomial decay rate as the sample size m grows. Since the concerned algorithm is non-convex, we also discuss how to iteratively generate a minimizing sequence, which enables the search for a local minimizer around any initial point.
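To make the objective concrete, the following is a minimal sketch of the q = 1 case: minimize (1/m)‖y − Kc‖² + λ‖c‖₁ over coefficients c of kernel features, solved here by standard iterative soft-thresholding (ISTA). This is an illustration of the problem form only, not the authors' algorithm or analysis; the Gaussian kernel, step size, and parameter values are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise Gaussian kernel matrix: K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def l1_kernel_regression(X, y, lam=0.1, sigma=1.0, n_iter=500):
    """ISTA for min_c (1/m)||y - K c||^2 + lam * ||c||_1  (the q = 1 case).

    Illustrative sketch only; kernel choice and solver are assumptions,
    not the method analyzed in the paper.
    """
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Gradient of the least-squares term is (2/m) K^T (K c - y);
    # its Lipschitz constant is (2/m) ||K||_2^2, so use step = 1/L.
    step = m / (2.0 * np.linalg.norm(K, 2) ** 2)
    c = np.zeros(m)
    for _ in range(n_iter):
        grad = (2.0 / m) * K.T @ (K @ c - y)
        z = c - step * grad
        # Soft-thresholding: the proximal operator of the l1 penalty.
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return c, K
```

For 0<q<1 the penalty Σ|c_j|^q is non-convex, which is why the paper speaks of local minimizers and of generating a minimizing sequence from an arbitrary initial point rather than of a global solution.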
© JMLR 2019.