In this talk, I will consider the online learning problem in which a convex function is minimized by sequentially observing its gradients. I will introduce SAEW, a new procedure that accelerates exponential weights procedures, which achieve the slow rate 1/sqrt(T), to the fast rate 1/T. Under strong convexity of the risk, it achieves the optimal rate of convergence for approximating sparse parameters in R^d. The acceleration is obtained by performing successive averaging steps in an online fashion. The procedure also produces sparse estimators thanks to additional hard-thresholding steps.
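The ingredients mentioned in the abstract can be sketched on a toy problem. The following is a hedged illustration, not the speaker's actual SAEW procedure: it combines an exponential-weights-style online learner (here the EG+/- exponentiated-gradient update), an online averaging step, and a final hard-threshold step, applied to a simple strongly convex risk. The dimension, target parameter, learning rate, and threshold are all hypothetical choices.

```python
import numpy as np

d, T, U = 20, 3000, 3.0            # dimension, rounds, L1-ball radius (hypothetical)
x_star = np.zeros(d)
x_star[:3] = [1.0, -0.5, 0.8]      # sparse parameter in R^d to approximate

w_plus = np.ones(d)                # EG+/- maintains two positive weight vectors
w_minus = np.ones(d)
eta = 0.02                         # learning rate (hypothetical)
avg = np.zeros(d)                  # running average of the iterates

for t in range(1, T + 1):
    s = w_plus.sum() + w_minus.sum()
    x = U * (w_plus - w_minus) / s          # current iterate in the L1 ball
    g = 2.0 * (x - x_star)                  # gradient of the risk ||x - x*||^2
    w_plus *= np.exp(-eta * U * g)          # multiplicative (exponential weights) update
    w_minus *= np.exp(eta * U * g)
    total = w_plus.sum() + w_minus.sum()    # renormalize for numerical stability
    w_plus /= total
    w_minus /= total
    avg += (x - avg) / t                    # averaging step, performed online

threshold = 0.1                             # hard threshold producing a sparse estimator
sparse_est = np.where(np.abs(avg) > threshold, avg, 0.0)
```

With these toy settings the thresholded average recovers the support of x_star. The actual SAEW procedure interleaves the averaging and thresholding steps differently and comes with the 1/T guarantee under strong convexity described in the abstract.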

Dates:

Friday, February 10, 2017 - 11:00

Location:

Inria, room A00

Speaker(s):

Pierre Gaillard

Affiliation(s):

Inria Sierra
