
In this thesis, a dynamic theory of learning, also called ``online learning'' in computer science, is presented as stochastic approximation of the regression function in reproducing kernel Hilbert spaces (RKHS). It starts from a probability measure on an input-output space, from which samples are drawn sequentially in an independent and identically distributed way. Online learning algorithms exploit the samples recursively, in contrast to ``batch learning,'' which has access to all the data at once. Novel probabilistic exponential inequalities in Hilbert spaces, in the tradition of the Russian school, are exploited to study martingale and reverse-martingale expansions of the error. Tight probabilistic upper bounds are obtained, in the sense that over a certain range of complexity classes, online learning algorithms achieve the same convergence rates as batch learning, and thus asymptotically attain optimal rates in an appropriate sense.
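The abstract refers to exponential inequalities for Hilbert-space-valued martingales; a classical inequality of this kind, which the thesis's novel bounds presumably refine, is the Pinelis-Bernstein inequality: for a martingale difference sequence $(\xi_i)$ in a Hilbert space with $\|\xi_i\| \le M$ almost surely and $\sum_{i=1}^{n} \mathbb{E}\big[\|\xi_i\|^2 \mid \mathcal{F}_{i-1}\big] \le \sigma^2$ almost surely,

```latex
% Pinelis--Bernstein inequality for Hilbert-space-valued martingales
\mathbb{P}\left( \sup_{1 \le k \le n} \Big\| \sum_{i=1}^{k} \xi_i \Big\| \ge \varepsilon \right)
  \le 2 \exp\!\left( - \frac{\varepsilon^2}{2\left(\sigma^2 + M\varepsilon/3\right)} \right),
  \qquad \varepsilon > 0.
```

The recursion the abstract describes, in one common form from the online-learning-in-RKHS literature, updates the current estimate on each new sample $(x_t, y_t)$ by $f_{t+1} = f_t - \gamma_t\big((f_t(x_t) - y_t)\,K_{x_t} + \lambda f_t\big)$, where $K_{x_t} = K(x_t, \cdot)$. The sketch below is illustrative only: the kernel, the step-size schedule $\gamma_t = c\,t^{-\theta}$, the regularization constant, and all function names are assumptions for demonstration, not necessarily the thesis's exact algorithm.

```python
import numpy as np

def gaussian_kernel(x, xp, sigma=1.0):
    """Gaussian (RBF) kernel K(x, x'); sigma is an illustrative choice."""
    d = np.atleast_1d(x) - np.atleast_1d(xp)
    return float(np.exp(-d @ d / (2.0 * sigma**2)))

def online_rkhs_regression(stream, lam=0.01, c=0.5, theta=0.5,
                           kernel=gaussian_kernel):
    """One pass of online regularized least squares in an RKHS (a sketch).

    Per-sample update (one common form in this literature):
        f_{t+1} = f_t - gamma_t * ((f_t(x_t) - y_t) * K(x_t, .) + lam * f_t),
    with step sizes gamma_t = c * t**(-theta).  The iterate f_t is stored
    as a kernel expansion  f_t = sum_i coeffs[i] * K(points[i], .).
    """
    points, coeffs = [], []

    def f(x):
        return sum(a * kernel(xi, x) for a, xi in zip(coeffs, points))

    for t, (x_t, y_t) in enumerate(stream, start=1):
        gamma_t = c * t ** (-theta)
        residual = f(x_t) - y_t
        # The -gamma_t * lam * f_t term shrinks all existing coefficients...
        coeffs = [(1.0 - gamma_t * lam) * a for a in coeffs]
        # ...and the data-fit term adds a new kernel centered at x_t.
        points.append(x_t)
        coeffs.append(-gamma_t * residual)

    return f

# Usage: recover f(x) = sin(x) from an i.i.d. stream of noisy samples.
rng = np.random.default_rng(0)
stream = [(x, np.sin(x) + 0.1 * rng.standard_normal())
          for x in rng.uniform(-3.0, 3.0, size=300)]
f_hat = online_rkhs_regression(stream)
print(f_hat(1.0), "vs", np.sin(1.0))
```

Storing $f_t$ as a kernel expansion makes each update cost $O(t)$ kernel evaluations, i.e. $O(T^2)$ over a stream of length $T$; that is fine for a demonstration, though larger streams would call for truncated or sparsified expansions.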
Page Count: 108
Publication Date: 2008-10-29