Gradient methods for huge-scale optimization problems
- Speaker: Yurii Nesterov, Université catholique de Louvain
- Date & Time: Tuesday 27 May 2014, 17:30 - 18:30
- Venue: Cambridge University Engineering Department, LR6
Abstract
We consider a new class of huge-scale problems: problems with sparse gradients. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of the iterations, whose total cost depends only logarithmically on the dimension. This technique is based on a recursive update of the results of matrix/vector products and of the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions. We show that the updating technique can be efficiently coupled with the simplest gradient methods. Similar results can be obtained for a new non-smooth random variant of a coordinate descent scheme. We also present promising results of preliminary computational experiments and discuss extensions of this technique.
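The core idea of updating a matrix/vector product rather than recomputing it can be sketched as follows. This is an illustrative example, not the implementation described in the talk: when a single coordinate of x changes, the product y = Ax can be refreshed by touching only the nonzeros of the affected column, so the per-iteration cost scales with the column sparsity rather than the full dimension. The function names (`sparse_columns`, `update_product`) are hypothetical.

```python
import numpy as np

def sparse_columns(a):
    """Precompute, for each column j of a, the row indices of its nonzeros."""
    return [np.nonzero(a[:, j])[0] for j in range(a.shape[1])]

def update_product(y, a, cols, j, delta):
    """Refresh y = a @ x in place after x[j] has changed by delta.

    Only the rows where column j is nonzero are touched, so the cost
    is proportional to the sparsity of column j, not to the dimension.
    """
    rows = cols[j]
    y[rows] += a[rows, j] * delta
    return y

# Usage: a coordinate step x[1] += 0.5, with y kept in sync incrementally.
a = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0],
              [4.0, 0.0, 5.0]])
x = np.ones(3)
y = a @ x                      # initial full product, computed once
cols = sparse_columns(a)       # precomputed sparsity pattern
x[1] += 0.5
update_product(y, a, cols, 1, 0.5)
```

After the update, `y` agrees with a full recomputation `a @ x`, but only one entry of `y` was modified.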
Series
This talk is part of the CUED Control Group Seminars series.
