Manifold Regularization and Semi-supervised Learning: Some Theoretical Analyses
Partha Niyogi; 14(37):1229−1250, 2013.
Abstract
Manifold regularization (Belkin et al., 2006) is a geometrically motivated framework for machine learning within which several semi-supervised algorithms have been constructed. Here we try to provide some theoretical understanding of this approach. Our main result is to expose the natural structure of a class of problems on which manifold regularization methods are helpful. We show that for such problems, no supervised learner can learn effectively. On the other hand, a manifold based learner (that knows the manifold or learns it from unlabeled examples) can learn with relatively few labeled examples. Our analysis follows a minimax style with an emphasis on finite sample results (in terms of $n$: the number of labeled examples). These results allow us to properly interpret manifold regularization and related spectral and geometric algorithms in terms of their potential use in semi-supervised learning.
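For concreteness, the sketch below shows one algorithm from the manifold regularization framework of Belkin et al. (2006), Laplacian Regularized Least Squares, in which a graph Laplacian built from labeled and unlabeled points supplies the intrinsic smoothness penalty alongside the usual ambient RKHS penalty. This is only an illustrative numpy sketch; the function names, the fully connected RBF graph, and the parameter defaults (`gamma_A`, `gamma_I`, `sigma`) are assumptions of this example, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def graph_laplacian(X, sigma=1.0):
    # Unnormalized graph Laplacian L = D - W on a fully connected
    # similarity graph over the labeled + unlabeled points.
    W = rbf_kernel(X, X, sigma)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def laprls_fit(X_lab, y_lab, X_unlab, gamma_A=1e-2, gamma_I=1e-1, sigma=1.0):
    # Laplacian Regularized Least Squares: squared loss on the l labeled
    # points, an ambient RKHS penalty (gamma_A), and an intrinsic penalty
    # (gamma_I) via the graph Laplacian estimated from all n = l + u points.
    X = np.vstack([X_lab, X_unlab])
    l, n = len(X_lab), len(X)
    K = rbf_kernel(X, X, sigma)                  # (n, n) Gram matrix
    L = graph_laplacian(X, sigma)                # (n, n) graph Laplacian
    J = np.zeros((n, n)); J[:l, :l] = np.eye(l)  # selects labeled points
    y = np.zeros(n); y[:l] = y_lab
    # Closed-form expansion coefficients:
    # alpha = (J K + gamma_A l I + gamma_I l / n^2 L K)^{-1} J y
    A = J @ K + gamma_A * l * np.eye(n) + (gamma_I * l / n ** 2) * (L @ K)
    alpha = np.linalg.solve(A, J @ y)
    return X, alpha, sigma

def laprls_predict(model, X_new):
    # f(x) = sum_i alpha_i K(x, x_i), expanded over labeled + unlabeled points.
    X, alpha, sigma = model
    return rbf_kernel(X_new, X, sigma) @ alpha
```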
© JMLR 2013.