Efficient SVM Training Using Low-Rank Kernel Representations
Shai Fine, Katya Scheinberg
SVM training is a convex optimization problem
whose complexity scales with the size of the training set rather than the dimension of the feature space.
While this is usually considered a desirable quality,
in large-scale problems it may make training impractical.
The common techniques for handling this difficulty build a solution
by solving a sequence of small-scale subproblems.
Our current effort is concentrated on the rank of the kernel matrix as a
source for further enhancement of the training procedure. We first show that
for a low rank kernel matrix it is possible to design a better
interior point method (IPM) in terms of storage requirements
as well as computational complexity. We then suggest an efficient
use of a known factorization technique to approximate a given kernel matrix
by a low rank matrix, which in turn will be used to feed the optimizer.
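As an illustration of the kind of factorization referred to here, the following is a minimal sketch of a greedy pivoted incomplete Cholesky decomposition, a standard way to approximate a kernel matrix K by a low-rank product G Gᵀ. The function name, interface, and stopping rule below are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def incomplete_cholesky(K, max_rank, tol=1e-10):
    """Greedy pivoted incomplete Cholesky factorization: K ~= G @ G.T.

    At each step, pivot on the largest remaining diagonal entry (the
    largest per-point approximation error) and stop early once that
    residual is negligible. Only the pivot columns of K are read, so a
    full n x n kernel matrix never needs to be stored when K[:, p] can
    be computed on demand from the kernel function.
    """
    n = K.shape[0]
    G = np.zeros((n, max_rank))
    d = np.diag(K).astype(float).copy()   # residual diagonal of K - G G^T
    for j in range(max_rank):
        p = int(np.argmax(d))
        if d[p] <= tol:                   # approximation already accurate: stop
            return G[:, :j]
        G[:, j] = (K[:, p] - G[:, :j] @ G[p, :j]) / np.sqrt(d[p])
        d = np.maximum(d - G[:, j] ** 2, 0.0)  # guard round-off negatives
    return G

# Demo: a linear kernel on 4-dimensional data has rank at most 4,
# so the factorization stops early and reproduces K almost exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
K = X @ X.T
G = incomplete_cholesky(K, max_rank=10)
print(G.shape[1], np.linalg.norm(K - G @ G.T))
```

The factor G produced this way can then be handed to the optimizer in place of the full kernel matrix.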
Finally, we derive an upper bound on the change in the objective function
value based on the approximation error and the number of active constraints
(support vectors). This bound is general in the sense that it holds
regardless of the approximation method.
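For intuition on where the IPM storage and complexity gains come from: with a low-rank factorization K ≈ G Gᵀ, the linear systems arising in each interior point iteration have diagonal-plus-low-rank structure and can be solved via the Sherman-Morrison-Woodbury identity instead of a dense factorization. A hedged sketch (the helper name and setup are our own, not the paper's implementation):

```python
import numpy as np

def solve_diag_plus_lowrank(d, G, b):
    """Solve (D + G G^T) x = b, where D = diag(d) with d > 0 and G is n x k.

    Uses the Sherman-Morrison-Woodbury identity
        (D + G G^T)^{-1} = D^{-1} - D^{-1} G (I_k + G^T D^{-1} G)^{-1} G^T D^{-1},
    which costs O(n k^2) time and O(n k) memory, versus O(n^3) time and
    O(n^2) memory for forming and factoring the dense n x n matrix.
    """
    Dinv_b = b / d
    Dinv_G = G / d[:, None]
    S = np.eye(G.shape[1]) + G.T @ Dinv_G        # k x k "capacitance" matrix
    return Dinv_b - Dinv_G @ np.linalg.solve(S, G.T @ Dinv_b)

# Sanity check against a dense solve on a small instance.
rng = np.random.default_rng(1)
n, k = 200, 5
d = rng.uniform(1.0, 2.0, size=n)
G = rng.standard_normal((n, k))
b = rng.standard_normal(n)
x = solve_diag_plus_lowrank(d, G, b)
print(np.linalg.norm((np.diag(d) + G @ G.T) @ x - b))
```

Since only d, G, and a k × k system appear, storage stays linear in the training set size when the rank k is small.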