Overparameterization of Deep ResNet: Zero Loss and Mean-field Analysis

Zhiyan Ding, Shi Chen, Qin Li, Stephen J. Wright; 23(48):1–65, 2022.


Finding parameters in a deep neural network (NN) that fit training data is a nonconvex optimization problem, yet a basic first-order optimization method (gradient descent) finds a global optimizer with perfect fit (zero loss) in many practical situations. We examine this phenomenon for Residual Neural Networks (ResNets) with smooth activation functions in a limiting regime in which both the number of layers (depth) and the number of weights in each layer (width) go to infinity. First, we use a mean-field-limit argument to prove that, in the large-NN limit, gradient descent for parameter training becomes a gradient flow for a probability distribution characterized by a partial differential equation (PDE). Next, we show that under certain assumptions, the solution to this PDE converges to a zero-loss solution as training time increases. Together, these results suggest that training a sufficiently large ResNet drives the loss to near zero. We give estimates of the depth and width needed to reduce the loss below a given threshold, with high probability.
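To make the limiting regime concrete, here is a hedged sketch in the usual mean-field-ResNet notation; the symbols $L$, $M$, $f$, $\theta_{l,m}$, $\rho$, and $E$ below are illustrative and need not match the paper's exact formulation. A ResNet of depth $L$ and width $M$ with the $1/(LM)$ scaling updates its state as
\[
  z_{l+1} \;=\; z_l \;+\; \frac{1}{LM}\sum_{m=1}^{M} f\!\left(z_l,\theta_{l,m}\right),
  \qquad l = 0,\dots,L-1,
\]
where $f$ is a smooth activation unit with weights $\theta_{l,m}$. As $L, M \to \infty$, the layer index becomes a continuous variable $s \in [0,1]$ and the weights in each layer are summarized by a probability measure $\rho_s$, giving the limiting dynamics
\[
  \frac{\mathrm{d} z_s}{\mathrm{d} s} \;=\; \int f\!\left(z_s,\theta\right)\,\rho_s(\mathrm{d}\theta).
\]
Gradient descent on the weights then corresponds, in this limit, to a gradient flow of $\rho$ on the loss functional $E(\rho)$, a PDE of Wasserstein gradient-flow type such as
\[
  \partial_t \rho \;=\; \nabla_\theta \cdot \left(\rho\, \nabla_\theta \frac{\delta E}{\delta \rho}\right),
\]
with $t$ the training time and $\delta E/\delta\rho$ the first variation of the loss. The paper's zero-loss result concerns the long-time behavior of a flow of this kind.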

© JMLR 2022.