BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:The Structure-Adaptive Acceleration of Stochastic Proximal Gradien
 t Algorithms - Junqi Tang\, University of Edinburgh
DTSTART:20190917T130000Z
DTEND:20190917T140000Z
UID:TALK129295@talks.cam.ac.uk
CONTACT:Jingwei Liang
DESCRIPTION:Stochastic gradient methods have become the de facto techniq
 ues in data science\, signal processing and machine learning\, owing to
  their computational efficiency on large-scale optimization problems. O
 ver the past few years\, accelerated stochastic gradient algorithms hav
 e been extensively studied and developed; they are not only excellent n
 umerically but also worst-case optimal theoretically for convex and smo
 oth objective functions. Many real-world applications involve composite
  optimization tasks in which non-smooth regularizers are used for bette
 r estimation or generalization. Such regularizers typically enforce low
 -dimensional structure in the solutions\, such as sparsity\, group spar
 sity\, low rank and piecewise smoothness. In this talk\, we present str
 ucture-adaptive variants of randomized optimization algorithms\, includ
 ing accelerated variance-reduced SGD and accelerated proximal coordinat
 e descent\, for solving large-scale composite optimization problems mor
 e efficiently. These algorithms are tailored to exploit the low-dimensi
 onal structure of the solution via judiciously designed restart schemes
  based on the restricted strong-convexity property that the non-smooth
  regularization induces in the objective function. Our convergence anal
 ysis shows that this approach achieves provably improved iteration comp
 lexity\, and we validate the efficiency of the algorithms numerically o
 n large-scale sparse regression tasks.
LOCATION:MR 14\, Centre for Mathematical Sciences
END:VEVENT
END:VCALENDAR
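
The restart idea in the abstract above can be made concrete with a small
sketch. What follows is a minimal illustration, not the speaker's method:
a deterministic accelerated proximal gradient method (FISTA) with a
standard gradient-based restart rule, applied to a lasso problem
min_x 0.5*||Ax - b||^2 + lam*||x||_1. All names here are invented for the
example; the talk's structure-adaptive schemes are stochastic and restart
according to restricted strong convexity rather than this simple
non-monotonicity test.

# Minimal sketch: restarted FISTA on a lasso problem (illustrative only).
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def restarted_fista(A, b, lam, n_iters=500):
    """Accelerated proximal gradient with a gradient-based restart rule
    (O'Donoghue & Candes style); hypothetical helper, not the talk's algorithm."""
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad of 0.5||Ax-b||^2
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    for _ in range(n_iters):
        grad = A.T @ (A @ y - b)           # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)
        if np.dot(y - x_new, x_new - x) > 0:
            # Momentum points "uphill": reset the acceleration sequence.
            t, y = 1.0, x_new.copy()
        else:
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            t = t_new
        x = x_new
    return x

# Usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 500))
x_true = np.zeros(500)
x_true[:10] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = restarted_fista(A, b, lam=0.1)
print("nonzeros in estimate:", np.count_nonzero(np.abs(x_hat) > 1e-3))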
