How to Fake Multiply by a Gaussian Matrix

Michael Kapralov, Vamsi Potluru, David Woodruff
Proceedings of The 33rd International Conference on Machine Learning, pp. 2101–2110, 2016

Abstract

Have you ever wanted to multiply an \(n \times d\) matrix \(X\), with \(n \gg d\), on the left by an \(m \times n\) matrix \(\tilde G\) of i.i.d. Gaussian random variables, but could not afford to do it because it was too slow? In this work we propose a new randomized \(m \times n\) matrix \(T\), for which one can compute \(T \cdot X\) in only \(O(nnz(X)) + \tilde O(m^{1.5} \cdot d^{3})\) time, for which the total variation distance between the distributions \(T \cdot X\) and \(\tilde G \cdot X\) is as small as desired, i.e., less than any positive constant. Here \(nnz(X)\) denotes the number of non-zero entries of \(X\). Assuming \(nnz(X) \gg m^{1.5} \cdot d^{3}\), this is a significant savings over the naïve \(O(nnz(X) m)\) time to compute \(\tilde G \cdot X\). Moreover, since the total variation distance is small, we can provably use \(T \cdot X\) in place of \(\tilde G \cdot X\) in any application and have the same guarantees as if we were using \(\tilde G \cdot X\), up to a small positive constant in error probability. We apply this transform to nonnegative matrix factorization (NMF) and support vector machines (SVM).
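The abstract does not spell out the construction of \(T\), but one standard way to obtain such speedups (used here purely as an illustrative sketch, not necessarily the paper's exact transform) is to compose a small Gaussian matrix with a sparse CountSketch matrix \(S\): applying \(S\) to \(X\) costs only \(O(nnz(X))\), and the Gaussian then only needs to act on a much smaller matrix. The dimensions and the choice of intermediate sketch size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 10_000, 5, 20   # n >> d; target sketch dimension m
t = 10 * m                # intermediate CountSketch dimension (illustrative choice)

X = rng.standard_normal((n, d))

# Naive approach: multiply by a dense m x n Gaussian directly.
# This costs O(nnz(X) * m) time in general.
G = rng.standard_normal((m, n))
GX = G @ X

# Composed sketch: S is a CountSketch matrix (one random +/-1 entry per
# column), so S @ X can be accumulated in O(nnz(X)) time.
rows = rng.integers(0, t, size=n)        # hash each row of X to a bucket
signs = rng.choice([-1.0, 1.0], size=n)  # random sign per row
SX = np.zeros((t, d))
for i in range(n):                       # O(nnz(X)) accumulation
    SX[rows[i]] += signs[i] * X[i]

# Only an m x t Gaussian is needed; T = G_small @ S is never formed explicitly.
G_small = rng.standard_normal((m, t))
TX = G_small @ SX

print(GX.shape, TX.shape)  # both are (m, d)
```

The accumulation loop touches each nonzero of \(X\) once, which is where the \(O(nnz(X))\) term comes from; all remaining work involves matrices whose dimensions depend only on \(m\), \(t\), and \(d\), not on \(n\).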