
Deep Network Approximation: Beyond ReLU to Diverse Activation Functions

Shijun Zhang, Jianfeng Lu, Hongkai Zhao; 25(35):1−39, 2024.

Abstract

This paper explores the expressive power of deep neural networks for a diverse range of activation functions. An activation function set A is defined to encompass the majority of commonly used activation functions, such as ReLU, LeakyReLU, ReLU², ELU, CELU, SELU, Softplus, GELU, SiLU, Swish, Mish, Sigmoid, Tanh, Arctan, Softsign, dSiLU, and SRS. We demonstrate that for any activation function ϱ ∈ A, a ReLU network of width N and depth L can be approximated to arbitrary precision by a ϱ-activated network of width 3N and depth 2L on any bounded set. This finding enables the extension of most approximation results achieved with ReLU networks to a wide variety of other activation functions, albeit with slightly increased constants. Significantly, we establish that the (width, depth) scaling factors can be further reduced from (3, 2) to (1, 1) if ϱ falls within a specific subset of A. This subset includes activation functions such as ELU, CELU, SELU, Softplus, GELU, SiLU, Swish, and Mish.
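As a rough illustration of why smooth activations such as Softplus can emulate ReLU on bounded sets, consider the rescaling x ↦ Softplus(tx)/t = log(1 + e^{tx})/t, whose uniform distance to ReLU(x) is exactly log(2)/t. The Python sketch below checks this numerically on [−5, 5]; it is not the paper's construction, and the helper names (relu, scaled_softplus) are ours.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def scaled_softplus(x, t):
        # (1/t) * log(1 + exp(t*x)); logaddexp keeps the computation stable for large t*x
        return np.logaddexp(0.0, t * x) / t

    # Sup-norm distance to ReLU on the bounded set [-5, 5] shrinks like log(2)/t.
    x = np.linspace(-5.0, 5.0, 10001)
    for t in (1.0, 10.0, 100.0, 1000.0):
        err = np.abs(scaled_softplus(x, t) - relu(x)).max()
        print(f"t = {t:7.1f}   sup error = {err:.6f}   log(2)/t = {np.log(2.0)/t:.6f}")

A single rescaled Softplus unit thus reproduces a ReLU neuron to arbitrary precision on a bounded set without extra width or depth, which gives some intuition for the (1, 1) scaling reported for the Softplus-like subset, although the paper's actual argument is more general.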

© JMLR 2024.
