Support Vector Machines Under Adversarial Label Noise
B. Biggio, B. Nelson & P. Laskov; JMLR W&CP 20:97–112, 2011.
In adversarial classification tasks like spam filtering and intrusion detection,
malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus,
besides achieving good classification performance, machine learning algorithms have to be robust
against adversarial data manipulation to successfully operate in these tasks. While support vector
machines (SVMs) have been shown to be a very successful approach to classification problems, their
effectiveness in adversarial classification tasks has not yet been extensively investigated. In
this paper we present a preliminary investigation of the robustness of SVMs against
adversarial data manipulation. In particular, we assume that the adversary has control
over some training data and aims to subvert the SVM learning process. Under this
assumption, we show that such an attack is indeed possible, and we propose a strategy to
improve the robustness of SVMs to training data manipulation based on a simple kernel
matrix correction.
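The kind of training-label manipulation the abstract alludes to can be sketched numerically: train a linear SVM on clean data, let a hypothetical adversary flip the labels of a fraction of the training points, retrain, and compare test accuracy. The Pegasos-style sub-gradient solver, the Gaussian toy data, and the random-flip adversary below are all our own assumptions for illustration; they are not the paper's experimental setup, and the sketch does not implement its kernel-matrix correction.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_blobs(n):
    # Two well-separated Gaussian classes with labels in {-1, +1}
    # (toy data; our assumption, not the paper's datasets).
    X_pos = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(n, 2))
    X_neg = rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(n, 2))
    X = np.vstack([X_pos, X_neg])
    y = np.hstack([np.ones(n), -np.ones(n)])
    return X, y

def train_linear_svm(X, y, lam=0.01, epochs=200):
    # Pegasos-style stochastic sub-gradient descent on the hinge loss;
    # a standalone stand-in for a proper SVM solver.
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(np.sign(X @ w + b) == y))

X_tr, y_tr = make_blobs(100)
X_te, y_te = make_blobs(500)

# Baseline: SVM trained on clean labels.
w, b = train_linear_svm(X_tr, y_tr)
clean_acc = accuracy(w, b, X_te, y_te)

# Adversary with control over part of the training set flips 40% of the labels.
y_adv = y_tr.copy()
flip = rng.choice(len(y_adv), size=int(0.4 * len(y_adv)), replace=False)
y_adv[flip] *= -1

w2, b2 = train_linear_svm(X_tr, y_adv)
noisy_acc = accuracy(w2, b2, X_te, y_te)

print(f"clean training labels:   test accuracy {clean_acc:.3f}")
print(f"40% of labels flipped:   test accuracy {noisy_acc:.3f}")
```

Random flips are the weakest such adversary; an attacker who targets points near the decision boundary, as studied in the paper, can degrade the learned classifier with far fewer flipped labels.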