EV-GAN: Simulation of extreme events with ReLU neural networks
Michaël Allouche, Stéphane Girard, Emmanuel Gobet; 23(150):1−39, 2022.
Abstract
Feedforward neural networks based on Rectified Linear Units (ReLU) cannot efficiently approximate quantile functions that are not bounded, especially in the case of heavy-tailed distributions. We thus propose a new parametrization for the generator of a Generative Adversarial Network (GAN) adapted to this framework, building on extreme-value theory. An analysis of the uniform error between the extreme quantile and its GAN approximation is provided: we establish that the rate of convergence of the error is mainly driven by the second-order parameter of the data distribution. These results are illustrated on simulated data and real financial data. Our approach is shown to outperform the classical GAN in a wide range of situations, including high-dimensional and dependent data.
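To see why unboundedness is the obstacle, a minimal sketch (not from the paper; the Pareto distribution and tail index value are illustrative assumptions) evaluates the quantile function q(p) = (1 - p)^(-1/gamma) of a Pareto law, which diverges as p approaches 1 — precisely the extreme regime a plain ReLU generator, continuous and piecewise linear on a compact latent domain, cannot reproduce efficiently:

```python
import numpy as np

# Illustrative heavy-tailed example: Pareto quantile function
# q(p) = (1 - p)^(-1/gamma), with tail index gamma > 0.
# As p -> 1, q(p) is unbounded, unlike any ReLU network output
# on a compact input domain.
gamma = 0.5  # tail index (illustrative choice, not taken from the paper)

def pareto_quantile(p, gamma=gamma):
    """Quantile function of a Pareto distribution with tail index gamma."""
    return (1.0 - np.asarray(p)) ** (-1.0 / gamma)

# The quantile explodes polynomially fast near the endpoint p = 1:
for p in [0.9, 0.99, 0.999, 0.9999]:
    print(f"q({p}) = {pareto_quantile(p):.1f}")
```

With gamma = 0.5 the exponent is -2, so each extra nine in p multiplies the quantile by 100, making the growth near p = 1 unmatchable by a bounded approximator.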
© JMLR 2022.