In this talk, we discuss online variants of the classical Frank-Wolfe algorithm. We consider minimizing the regret with a stochastic cost. Our online algorithms require only simple iterative updates and a non-adaptive step size rule, in contrast to the hybrid schemes commonly considered in the literature. The proofs rely on bounding the duality gaps of the online algorithms. Several novel results are derived. The regret bound and anytime optimality for a strongly convex stochastic cost are shown to be as fast as O(\log^3 T / T) and O(\log^2 T / T), respectively, where T is the number of rounds played. We apply these results to online low-rank matrix completion problems with exponential family noise.
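
The "simple iterative updates and a non-adaptive step size rule" mentioned above can be illustrated with a minimal sketch of an online Frank-Wolfe loop. This is an assumed, simplified instance (not the speakers' exact algorithm): losses arrive one per round, each update makes a single call to a linear minimization oracle, here over an ℓ1 ball, and the step size γ_t = 2/(t+2) is fixed in advance rather than chosen adaptively.

```python
import numpy as np

def lmo_l1(grad, radius):
    """Linear minimization oracle over the l1 ball of the given radius:
    argmin over ||v||_1 <= radius of <grad, v> is a signed vertex at the
    coordinate of largest absolute gradient."""
    i = int(np.argmax(np.abs(grad)))
    v = np.zeros_like(grad)
    v[i] = -radius * np.sign(grad[i])
    return v

def online_frank_wolfe(grads, x0, radius=1.0):
    """Online Frank-Wolfe sketch: one LMO call and one convex-combination
    update per round, with the non-adaptive step size gamma_t = 2/(t+2).
    `grads` is a list of per-round gradient callbacks for the losses f_t."""
    x = x0.copy()
    for t, grad_fn in enumerate(grads):
        g = grad_fn(x)             # gradient of the round-t loss at x_t
        v = lmo_l1(g, radius)      # solve the linear subproblem (projection-free)
        gamma = 2.0 / (t + 2)      # non-adaptive step size rule
        x = (1 - gamma) * x + gamma * v
    return x

# Toy demo: noisy quadratic losses f_t(x) = 0.5 * ||x - (theta + noise_t)||^2,
# a stand-in for the stochastic costs discussed in the abstract.
rng = np.random.default_rng(0)
theta = np.array([0.5, -0.3, 0.0])
grads = [lambda x, n=rng.normal(0.0, 0.1, 3): x - (theta + n) for _ in range(200)]
x_final = online_frank_wolfe(grads, np.zeros(3), radius=1.0)
```

Because each iterate is a convex combination of vertices of the feasible set, the iterates remain feasible at every round without any projection step; for matrix completion the same template would use a nuclear-norm ball, whose LMO is a rank-one update from a top singular vector pair.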