MLPs (Mono-Layer Polynomials and Multi-Layer Perceptrons) for Nonlinear Modeling
Isabelle Rivals, Léon Personnaz;
3(Mar):1383-1398, 2003.
Abstract
This paper presents a model selection procedure which stresses the importance of the classic polynomial models as tools for evaluating the complexity of a given modeling problem, and for removing non-significant input variables. If the complexity of the problem makes a neural network necessary, the selection among neural candidates can be performed in two phases. In an additive phase, the most important one, candidate neural networks with an increasing number of hidden neurons are trained. The addition of hidden neurons is stopped when the effect of the round-off errors becomes significant, so that, for instance, confidence intervals cannot be accurately estimated. This phase leads to a set of approved candidate networks. In a subsequent subtractive phase, a selection among approved networks is performed using statistical Fisher tests. The series of tests starts from a possibly too large unbiased network (the full network), and ends with the smallest unbiased network whose input variables and hidden neurons all have a significant contribution to the regression estimate. This method was successfully tested against the real-world regression problems proposed at the NIPS2000 Unlabeled Data Supervised Learning Competition; two of them are included here as illustrative examples.
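The subtractive phase described above amounts to a sequence of Fisher tests between nested least-squares models. As a minimal illustrative sketch (not the authors' implementation), the following Python function shows the standard nested-model F-test that such a phase could apply; the function name and all argument names are assumptions, and scipy is used only for the F distribution.

from scipy.stats import f as f_dist

def nested_fisher_test(sse_full, p_full, sse_small, p_small, n, alpha=0.05):
    # Illustrative sketch, not the paper's code: compare a full (unbiased)
    # network with residual sum of squares sse_full and p_full parameters
    # against a smaller nested candidate (sse_small, p_small) fitted on
    # n training examples.
    f_stat = ((sse_small - sse_full) / (p_full - p_small)) / (sse_full / (n - p_full))
    p_value = f_dist.sf(f_stat, p_full - p_small, n - p_full)
    # If the test is not rejected, the removed inputs or hidden neurons
    # make no significant contribution, so the smaller model is retained.
    return p_value > alpha

Starting from the full network, such a test would be repeated against successively smaller approved candidates, stopping at the smallest network that is never rejected.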