S. Sukhbaatar, T. Makino, K. Aihara & T. Chikayama; JMLR W&CP 20:231–246.
Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Net
We propose a Deep Belief Net model for robust motion generation, which consists of two
layers of Restricted Boltzmann Machines (RBMs). The lower layer has multiple RBMs for
encoding real-valued spatial patterns of motion frames into compact representations. The upper
layer has one conditional RBM for learning temporal constraints on transitions between those
compact representations. This separation of spatial and temporal learning makes it possible to
reproduce many attractive dynamical behaviors, such as walking as a stable limit cycle, gait
transition via bifurcation, synchronization of limbs by phase-locking, and easy top-down control.
We trained the model on human motion capture data, and the results of motion generation are
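The two-layer structure described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): a Gaussian-visible RBM plays the role of one lower-layer spatial encoder mapping a real-valued motion frame to a compact binary-probability code, and a conditional RBM models the transition from the previous code to the next one. All class names, sizes, and the mean-field generation step are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianRBM:
    """Lower-layer RBM (hypothetical minimal version): encodes a
    real-valued motion frame into a compact representation."""
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)

    def encode(self, v):
        # P(h=1 | v) for Gaussian visible units with unit variance
        return sigmoid(v @ self.W + self.b_h)

class ConditionalRBM:
    """Upper-layer conditional RBM (sketch): models the transition
    between compact codes, conditioned on the previous code."""
    def __init__(self, n_code, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_code, n_hidden))
        self.A = 0.01 * rng.standard_normal((n_code, n_code))    # past -> visible bias
        self.B = 0.01 * rng.standard_normal((n_code, n_hidden))  # past -> hidden bias
        self.b_v = np.zeros(n_code)
        self.b_h = np.zeros(n_hidden)

    def step(self, code_prev, n_gibbs=5):
        # One generation step: alternate Gibbs sampling for the next
        # code, with biases shifted by the previous code.
        v = code_prev.copy()
        for _ in range(n_gibbs):
            p_h = sigmoid(v @ self.W + code_prev @ self.B + self.b_h)
            h = (rng.random(p_h.shape) < p_h).astype(float)
            v = sigmoid(h @ self.W.T + code_prev @ self.A + self.b_v)  # mean-field visible update
        return v

# Spatially encode one frame, then take one temporal transition step.
frame = rng.standard_normal(60)          # e.g. 60 joint-angle values (assumed)
lower = GaussianRBM(n_visible=60, n_hidden=30)
upper = ConditionalRBM(n_code=30, n_hidden=50)

code_t = lower.encode(frame)
code_next = upper.step(code_t)
```

Separating the layers this way means the conditional RBM never sees raw joint angles, only the compact codes, which is what allows the temporal model to exhibit the limit-cycle and phase-locking behaviors mentioned above.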