Tracking Linear-threshold Concepts with Winnow
Chris Mesterharm; 4(Oct):819-838, 2003.
Abstract
In this paper, we give a mistake bound for learning arbitrary
linear-threshold concepts that are allowed to change over time in
the on-line model of learning. We use a variation of the Winnow
algorithm and show that the bounds for learning shifting
linear-threshold functions have many of the same advantages that the
traditional Winnow algorithm has on fixed concepts. These benefits
include a weak dependence on the number of irrelevant attributes,
inexpensive runtime, and robust behavior against noise. In fact, we
show that the bound for tracking Winnow has even better performance
with respect to irrelevant attributes. Let X ∈ [0,1]^n be an
instance of the learning problem. In the previous bounds, the
number of mistakes depends on ln n. In this paper, the shifting
concept bound depends on max ln(||X||_1). We show that
this behavior is a result of certain parameter choices in the
tracking version of Winnow, and we show how to use related
parameters to get a similar mistake bound for the traditional
fixed concept version of Winnow.
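For context, the sketch below shows the standard fixed-concept Winnow update (multiplicative promotion and demotion of attribute weights on mistakes), not the paper's tracking variant. The promotion factor alpha, threshold theta, and the example target concept are illustrative assumptions, not the parameter choices analyzed in the paper.

```python
# Minimal sketch of the standard Winnow algorithm for instances X in [0,1]^n.
# alpha (promotion factor) and theta (threshold) are textbook defaults here,
# not the values used in the paper's tracking analysis.
import numpy as np

def winnow_predict(w, x, theta):
    """Predict 1 if the weighted sum reaches the threshold, else 0."""
    return 1 if np.dot(w, x) >= theta else 0

def winnow_update(w, x, y, y_hat, alpha=2.0):
    """On a mistake, multiplicatively promote or demote weights of active attributes."""
    if y_hat == y:
        return w                                   # correct prediction: no change
    if y == 1:                                     # false negative: promote
        return w * np.where(x > 0, alpha ** x, 1.0)
    return w * np.where(x > 0, alpha ** (-x), 1.0) # false positive: demote

# Usage example on a hypothetical target concept (a disjunction of two attributes).
n = 20
theta = float(n)
w = np.ones(n)
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.integers(0, 2, size=n).astype(float)
    y = 1 if (x[0] > 0 or x[3] > 0) else 0
    y_hat = winnow_predict(w, x, theta)
    w = winnow_update(w, x, y, y_hat)
```

Because the weights of irrelevant attributes are only ever demoted, the mistake bound of this multiplicative scheme grows only logarithmically with the number of irrelevant attributes, which is the property the paper extends to shifting concepts.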