Unsupervised and Transfer Learning Challenge:
a Deep Learning Approach
Learning good representations from a large set of unlabeled data is a particularly challenging task. Recent work (see ? for a review) shows that training deep architectures is a good way to extract such representations, by gradually extracting and disentangling higher-level factors of variation characterizing the input distribution. In this paper, we describe the different kinds of layers we trained for learning representations in the setting of the Unsupervised and Transfer Learning Challenge. Our team's strategy won the final phase of the challenge. It combined and stacked different one-layer unsupervised learning algorithms, adapted to each of the five datasets of the competition. This paper describes that strategy and the particular one-layer learning algorithms feeding a simple linear classifier with a tiny number of labeled training samples (1 to 64 per class).
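The overall pipeline described above can be illustrated with a minimal sketch: unsupervised layers are fit on a large unlabeled set, stacked, and their output representation is fed to a linear classifier trained on only a handful of labels per class. This sketch assumes scikit-learn and uses PCA whitening as a stand-in for the challenge's actual layer types (the paper's layers, datasets, and hyperparameters differ); the synthetic data is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for a challenge dataset: many unlabeled points,
# but only 4 labeled examples per class (hypothetical numbers).
X_unlabeled = rng.normal(size=(2000, 50))
centers = rng.normal(scale=3.0, size=(5, 50))
X_labeled = np.vstack([c + rng.normal(size=(4, 50)) for c in centers])
y_labeled = np.repeat(np.arange(5), 4)

# "Stacked" unsupervised layers: two PCA stages, each fit only on
# unlabeled data (no labels are used to learn the representation).
layer1 = PCA(n_components=30, whiten=True).fit(X_unlabeled)
layer2 = PCA(n_components=10, whiten=True).fit(layer1.transform(X_unlabeled))

def represent(X):
    # Apply the learned layers in sequence to map raw inputs
    # to the final representation.
    return layer2.transform(layer1.transform(X))

# A simple linear classifier trained on the tiny labeled set.
clf = LogisticRegression(max_iter=1000).fit(represent(X_labeled), y_labeled)
train_acc = clf.score(represent(X_labeled), y_labeled)
```

The key design point mirrored here is the separation of concerns: the representation is learned entirely without labels, so the scarce labeled samples only need to fit a low-capacity linear model on top.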