Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes

Ohad Shamir, Tong Zhang
JMLR W&CP 28 (1) : 71–79, 2013


Stochastic Gradient Descent (SGD) is one of the simplest and most popular stochastic optimization methods. While it has been studied theoretically for decades, the classical analysis usually requires non-trivial smoothness assumptions, which do not apply to many modern applications of SGD with non-smooth objective functions, such as support vector machines. In this paper, we investigate the performance of SGD without such smoothness assumptions, as well as a running-average scheme to convert the SGD iterates to a solution with optimal optimization accuracy. In this framework, we prove that after \(T\) rounds, the suboptimality of the last SGD iterate scales as \(O(\log(T)/\sqrt{T})\) for non-smooth convex objective functions, and as \(O(\log(T)/T)\) in the non-smooth strongly convex case. To the best of our knowledge, these are the first bounds of this kind, and they almost match the minimax-optimal rates obtainable by appropriate averaging schemes. We also propose a new and simple averaging scheme, which not only attains optimal rates but can also be easily computed on-the-fly (in contrast, the suffix averaging scheme proposed in earlier work is not as simple to implement). Finally, we provide some experimental illustrations.
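To make the setting concrete, the sketch below runs SGD on a simple non-smooth convex objective while maintaining a uniform average of the iterates incrementally, so no history needs to be stored. This is an illustrative toy, not the paper's proposed averaging scheme: the objective \(f(x)=|x|\), the \(c/\sqrt{t}\) step sizes, the noise level, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def sgd_running_average(grad, x0, T, c=1.0, seed=0):
    """SGD on a non-smooth convex objective with an on-the-fly running average.

    grad: function (x, rng) -> noisy subgradient estimate at x
    Step size eta_t = c / sqrt(t), the standard choice for the
    (non-strongly) convex setting.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    avg = x.copy()
    for t in range(1, T + 1):
        g = grad(x, rng)
        x = x - (c / np.sqrt(t)) * g
        # Incremental uniform average: avg_t = avg_{t-1} + (x_t - avg_{t-1}) / t,
        # equivalent to (x_1 + ... + x_t) / t but computed in O(1) memory.
        avg += (x - avg) / t
    return x, avg

# Example: minimize the non-smooth f(x) = |x| from noisy subgradients.
grad = lambda x, rng: np.sign(x) + 0.1 * rng.standard_normal(x.shape)
last, avg = sgd_running_average(grad, x0=np.array([5.0]), T=10000)
```

Here `last` is the final iterate (whose suboptimality the paper bounds by \(O(\log(T)/\sqrt{T})\) in this convex case) and `avg` is the averaged solution; both end up near the minimizer \(x=0\).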
