
Optimal Bump Functions for Shallow ReLU networks: Weight Decay, Depth Separation, Curse of Dimensionality

Stephan Wojtowytsch

Year: 2024, Volume: 25, Issue: 27, Pages: 1−49


Abstract

In this note, we study how neural networks with a single hidden layer and ReLU activation interpolate data drawn from a radially symmetric distribution with target labels 1 at the origin and 0 outside the unit ball, if no labels are known inside the unit ball. With weight decay regularization and in the infinite neuron, infinite data limit, we prove that a unique radially symmetric minimizer exists, whose average parameters and Lipschitz constant grow as $d$ and $\sqrt{d}$ respectively. We furthermore show that the average weight variable grows exponentially in $d$ if the label 1 is imposed on a ball of radius $\varepsilon$ rather than just at the origin. By comparison, a neural network with two hidden layers can approximate the target function without encountering the curse of dimensionality.
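The setup described in the abstract can be illustrated with a minimal sketch: a one-hidden-layer ReLU network trained by gradient descent with weight decay on data labeled 1 at the origin and 0 outside the unit ball, with no samples inside the ball. This is not the paper's analysis (which concerns the infinite neuron, infinite data limit); the dimension, width, sample count, step size, and decay strength below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 64, 400            # input dimension, hidden width, sample count

# Training data: half the points at the origin (label 1), half drawn
# outside the unit ball at radii in [1, 2] (label 0). Nothing is sampled
# strictly inside the unit ball, matching the interpolation setup.
x0 = np.zeros((n // 2, d))
u = rng.normal(size=(n // 2, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
x1 = u * rng.uniform(1.0, 2.0, size=(n // 2, 1))
X = np.vstack([x0, x1])
y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])

# One hidden layer: f(x) = sum_k c_k * relu(w_k . x + b_k)
W = rng.normal(size=(m, d)) / np.sqrt(d)
b = np.zeros(m)
c = rng.normal(size=m) / np.sqrt(m)

lr, wd = 0.05, 1e-3             # step size and weight decay strength
for _ in range(2000):
    z = X @ W.T + b             # pre-activations, shape (n, m)
    a = np.maximum(z, 0.0)      # ReLU
    err = a @ c - y             # squared-loss residual
    grad_c = a.T @ err / n + wd * c
    ga = np.outer(err, c) * (z > 0)          # back-prop through ReLU
    grad_W = ga.T @ X / n + wd * W
    grad_b = ga.sum(axis=0) / n
    c -= lr * grad_c
    W -= lr * grad_W
    b -= lr * grad_b

f0 = float(np.maximum(b, 0.0) @ c)           # network value at the origin
mse = float(np.mean((np.maximum(X @ W.T + b, 0.0) @ c - y) ** 2))
print(f0, mse)
```

Predicting 0 everywhere would give a mean squared error of 0.5 on this balanced sample, so a trained network should do substantially better while the weight decay term keeps the parameter norms, and hence the Lipschitz constant of the fitted bump, controlled.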
