Adaptation in Online Learning through Dimension-Free Exponentiated Gradient

Event date: 
Thursday, 19 December 2013, 14:30
Location: 
Aula Magna
Speaker: 
Prof. Francesco Orabona, TTI Chicago, USA
Abstract

As the big data paradigm gains momentum, learning algorithms trained with fast stochastic gradient descent methods are becoming the de facto standard in industry. Still, even these simple procedures cannot be used completely "off-the-shelf", because their parameters, e.g. the learning rate, have to be properly tuned to the particular problem to achieve fast convergence.
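A toy illustration of the tuning issue mentioned above: plain gradient descent on the one-dimensional objective f(w) = w²/2, where the choice of learning rate alone decides between slow progress, fast convergence, and outright divergence. The objective, learning rates, and step counts here are invented purely for illustration.

```python
def gradient_descent(lr, steps=50, w0=10.0):
    """Run gradient descent on f(w) = 0.5 * w**2, whose gradient is w."""
    w = w0
    for _ in range(steps):
        w -= lr * w  # gradient step: w <- w - lr * f'(w)
    return w

# Three runs differing only in the learning rate:
slow = abs(gradient_descent(0.1))   # contracts by 0.9 per step: converging, but slowly
fast = abs(gradient_descent(1.0))   # on this quadratic, jumps to the optimum in one step
diverging = abs(gradient_descent(2.5))  # multiplies |w| by 1.5 per step: blows up
```

The same sensitivity carries over to stochastic gradient descent on real losses, which is why "off-the-shelf" use without tuning is problematic.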

The online learning framework is a powerful tool for designing fast learning algorithms that work in both the stochastic and the adversarial setting.
In this talk I will introduce new advances in the time-varying regularization framework for online learning, which allow one to derive almost parameter-free adaptive algorithms. In particular, I will focus on a new algorithm based on a dimension-free exponentiated gradient. In contrast to existing online algorithms, it achieves an optimal regret bound, up to logarithmic terms, without any parameters and without any prior knowledge about the optimal solution.
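To convey the flavor of such an update, here is a heavily simplified sketch in the spirit of a dimension-free exponentiated-gradient method: the learner accumulates negative (sub)gradients in a vector θ, divides by a running quantity H that grows with the observed gradient magnitudes, and scales the prediction by an exponential of ‖θ‖, so that the effective "learning rate" adapts on its own. The constants, the exact form of H, and the scaling below are assumptions for illustration only, not the algorithm presented in the talk.

```python
import math

def dfeg_style_predictions(gradients, a=1.0, delta=1.0):
    """Sketch of a dimension-free exponentiated-gradient-style learner.

    `gradients` is a list of (sub)gradient vectors observed round by round.
    Returns the prediction made at the start of each round. The constants
    `a` and `delta` and the H schedule are illustrative assumptions.
    """
    theta = [0.0, 0.0]  # running sum of negative (sub)gradients
    H = delta           # grows with the squared gradient norms seen so far
    preds = []
    for g in gradients:
        norm = math.sqrt(sum(x * x for x in theta))
        # Magnitude of the prediction grows exponentially with ||theta||,
        # tempered by H; direction follows theta. No learning rate to tune.
        scale = math.exp(norm * norm / (2 * H)) / (a * H) if norm > 0 else 0.0
        preds.append([scale * x for x in theta])
        theta = [t - gi for t, gi in zip(theta, g)]  # accumulate -gradients
        H += sum(gi * gi for gi in g)                # adapt the denominator
    return preds
```

Note how every quantity driving the update (θ, H) is computed from the observed gradients themselves, which is the sense in which such methods are parameter-free.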

Contact: 
caputo@dis.uniroma1.it