Deep Boltzmann Machines as Feed-Forward Hierarchies
Grégoire Montavon, Mikio Braun, Klaus-Robert Müller; JMLR W&CP 22: 798-804, 2012.
The deep Boltzmann machine is a powerful model that extracts the hierarchical structure of observed data. While inference is typically slow because of the model's undirected connectivity, we argue that the emerging feature hierarchy is still explicit enough to be traversed in a feed-forward fashion. The claim is corroborated by training a set of deep neural networks on real data and measuring the evolution of the representation, layer after layer. The analysis reveals that the deep Boltzmann machine produces a feed-forward hierarchy of increasingly invariant representations that clearly surpasses the layer-wise approach.
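To illustrate what a feed-forward traversal of a trained deep Boltzmann machine might look like, the sketch below propagates a visible vector through the model's weight matrices with sigmoid activations, collecting the representation at each layer. All names (`feedforward_traversal`, the toy layer sizes, the random weights) are illustrative assumptions, not the paper's actual experimental setup; the input-doubling heuristic for intermediate layers is one common way to compensate for the top-down signal that is absent in a single bottom-up pass.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward_traversal(v, weights, biases):
    """Hypothetical sketch: treat a trained DBM's weight matrices as a
    feed-forward network and record the mean activation of each layer.

    v       : visible vector, shape (n_visible,)
    weights : list of weight matrices, layer l has shape (n_l, n_{l+1})
    biases  : list of hidden bias vectors, one per hidden layer
    """
    reps = [v]
    h = v
    for l, (W, b) in enumerate(zip(weights, biases)):
        pre = h @ W + b
        # Assumption: double the bottom-up input for all but the top
        # layer to stand in for the missing top-down contribution.
        if l < len(weights) - 1:
            pre = 2.0 * pre
        h = sigmoid(pre)
        reps.append(h)
    return reps

# Toy example with random weights (layer sizes are arbitrary).
rng = np.random.default_rng(0)
sizes = [6, 4, 3]
Ws = [rng.standard_normal((sizes[i], sizes[i + 1])) for i in range(len(sizes) - 1)]
bs = [np.zeros(s) for s in sizes[1:]]
layers = feedforward_traversal(rng.random(6), Ws, bs)
print([r.shape for r in layers])  # → [(6,), (4,), (3,)]
```

Measuring how a quantity of interest (e.g. invariance of the representation) evolves across the entries of `layers` is the kind of layer-by-layer analysis the abstract describes.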