
Toward Understanding Convolutional Neural Networks from Volterra Convolution Perspective

Tenghui Li, Guoxu Zhou, Yuning Qiu, Qibin Zhao; 23(311):1−50, 2022.

Abstract

We attempt to understand (deep) convolutional neural networks by exploring their relationship to Volterra convolutions. We propose a novel approach to explain and study the overall characteristics of a neural network without being distracted by its complex architecture. Specifically, we convert the basic structures of a convolutional neural network (CNN), and their combinations, to the form of Volterra convolutions. The results show that most convolutional neural networks can be approximated as Volterra convolutions, where the approximated proxy kernels preserve the characteristics of the original network. Analyzing these proxy kernels may therefore give valuable insight into the original network. Based on this setup, we present methods to approximate the order-zero and order-one proxy kernels, and verify the correctness and effectiveness of our results.
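To make the abstract concrete, here is a minimal sketch of a discrete Volterra convolution truncated at order two: an order-zero term (a constant), an order-one term (an ordinary linear convolution), and an optional order-two term (a quadratic form over the input window). This is only an illustration of the Volterra form the abstract refers to, not the authors' approximation procedure; the function name and memory-length convention are assumptions.

```python
import numpy as np

def volterra_conv(x, h0, h1, h2=None):
    """Discrete Volterra convolution truncated at order two.

    y[n] = h0 + sum_k h1[k] x[n-k]
              + sum_{k1,k2} h2[k1,k2] x[n-k1] x[n-k2]

    h0 is the order-zero kernel (a scalar), h1 the order-one kernel
    (length-K vector), and h2 an optional order-two kernel (K x K matrix).
    The input is zero-padded on the left, as in a causal convolution.
    """
    N, K = len(x), len(h1)
    xp = np.concatenate([np.zeros(K - 1), np.asarray(x, float)])
    y = np.full(N, float(h0))
    for n in range(N):
        # window [x[n], x[n-1], ..., x[n-K+1]]
        win = xp[n:n + K][::-1]
        y[n] += h1 @ win                 # order-one (linear) term
        if h2 is not None:
            y[n] += win @ h2 @ win       # order-two (quadratic) term
    return y
```

With `h2=None` this reduces to a standard linear convolution plus a bias, which is exactly the order-one case; the order-two kernel is what lets the series capture the nonlinearity that a plain convolution cannot.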

© JMLR 2022.
