Uniform Title: Improving on-line learning
Name: Mesterharm, Chris (author), Hirsh, Haym (chair), Littman, Michael (internal member), Steiger, William (internal member), Schapire, Robert (outside member), Rutgers University, Graduate School - New Brunswick
Subject: Computational learning theory
Description: In this dissertation, we consider techniques to improve the performance and applicability of algorithms used for on-line learning. We organize these techniques according to the assumptions they make about the generation of instances. Our first assumption is that the instances are generated by a fixed distribution. Many algorithms are designed to perform well when instances are generated by an adversary; we give two techniques that modify these algorithms to improve performance when the instances are instead generated by a distribution. We validate these techniques with extensive experiments on a wide range of real-world data sets. Our second assumption is that the target concept the algorithm is attempting to learn changes over time. We give a modification of the Winnow algorithm and show that it has good bounds for tracking a shifting concept when instances are generated by an adversary. We also consider the case in which the instances are generated by a shifting distribution; we apply variations of the earlier fixed-distribution techniques and show, with experiments derived from real data, that these techniques continue to significantly improve performance. Finally, we assume that the labels for instances may be delayed for a number of trials. We give techniques that modify an on-line algorithm so that it performs well even when labels are delayed. We derive upper bounds on the performance of these modifications and show through lower bounds that these modifications are close to optimal.
Note: Includes bibliographical references (p. 284-290).
Collection: Graduate School - New Brunswick Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.