Transfer Learning by Kernel Meta-Learning
Abstract
A crucial issue in machine learning is how to learn appropriate representations for data. Recently, much work has been devoted to kernel learning, that is, the problem of finding a good kernel matrix for a given task. This can be done in a semi-supervised learning setting by using a large set of unlabeled data and a (typically small) set of i.i.d. labeled data. Another, even more challenging, problem is how one can exploit partially labeled data from a source task to learn good representations for a different, but related, target task. This is the main subject of transfer learning.
In this paper, we present a novel approach to transfer learning based on kernel learning. Specifically, we propose a kernel meta-learning algorithm which, starting from a basic kernel, tries to learn chains of kernel transforms that are able to produce good kernel matrices for the source tasks. The same sequence of transformations can then be applied to compute the kernel matrix for new, related target tasks. We report on the application of this method to the five datasets of the Unsupervised and Transfer Learning (UTL) challenge benchmark, where we won the first phase of the competition.
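To make the core idea concrete, the following is a minimal sketch of applying a fixed chain of kernel transforms, learned on a source task, to a target task's base kernel matrix. The specific transforms shown here (centering, a spectral power step, cosine normalization) are illustrative assumptions, not the paper's actual operator set, and `apply_chain` is a hypothetical helper.

```python
import numpy as np

def center(K):
    # Center the kernel matrix in feature space.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def spectral_power(K, p=0.5):
    # Raise the (non-negative) eigenvalues to the power p,
    # a simple example of a spectral kernel transform.
    w, V = np.linalg.eigh(K)
    w = np.clip(w, 0.0, None) ** p
    return V @ np.diag(w) @ V.T

def normalize(K):
    # Cosine normalization: scale so diagonal entries equal 1.
    d = np.sqrt(np.clip(np.diag(K), 1e-12, None))
    return K / np.outer(d, d)

def apply_chain(K, chain):
    # Apply a learned sequence of transforms to a base kernel matrix.
    # The chain would be selected on the source tasks and then reused,
    # unchanged, on a related target task.
    for transform in chain:
        K = transform(K)
    return K

# Base linear kernel on some (synthetic) target-task data.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
K_target = apply_chain(X @ X.T, [center, spectral_power, normalize])
print(K_target.shape)  # (5, 5)
```

The point of the sketch is that the transforms operate only on the kernel matrix itself, so a chain chosen for source tasks can be re-applied verbatim to any new task's base kernel.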