BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:"Delay-independent incremental global convergence in monotone syst
 ems of delay differential equations" and "(Non)-convergence of the gradie
 nt method to saddle points of concave-convex functions" - Eoin Devane and
  Tom Holding (CCA)
DTSTART:20141119T160000Z
DTEND:20141119T170000Z
UID:TALK55630@talks.cam.ac.uk
CONTACT:Eavan Gleeson
DESCRIPTION:This week we have a pair of short talks that will be given at 
 the 53rd IEEE Conference on Decision and Control later this year.\n\nTalk 
 1: Delay-independent incremental global convergence in monotone systems o
 f delay differential equations (Eoin Devane)\n\nMonotone systems generated
  by delay differential equations with explicit time-variation are of impor
 tance in the modelling of a number of significant practical problems\, inc
 l
 uding the analysis of communication systems and consensus protocols. In su
 ch problems\, it is often of importance to be able to guarantee delay-inde
 pendent incremental global convergence\, whereby all solutions converge to
 wards each other asymptotically. Such guarantees allow the asymptotic prop
 erties of all trajectories of the system to be determined by simply studyi
 ng those of some particular convenient solution. A class of these systems 
 was recently studied in the context of wireless networks through the impos
 ition of a scalability condition. In this work\, we seek to weaken the not
 ions of monotonicity and system structure that were employed in that setti
 ng so as to extend this analysis to a more general context and make explic
 it exactly which system properties are required. We show that a generalise
 d two-sided scalability condition\, which is weaker than the separate assu
 mptions of quasimonotonicity and scalability\, can be used to express the 
 system properties that are needed to prove delay-independent global conver
 gence. Furthermore\, we obtain as a corollary a result of guaranteed conve
 rgence of all solutions to a quantifiable invariant set\, enabling time in
 variant asymptotic bounds to be obtained for the trajectories even if the 
 precise values of time-varying parameters are unknown.\n\nTalk 2: (Non)-
 convergence of the gradient method to saddle points of concave-convex func
 tions (Tom Holding)\n\nThe problem of finding the saddle point of a concav
 e (in x for fixed y) convex (in y for fixed x) function f(x\,y) is ubiquit
 ous in optimization and numerical analysis. Such problems arise\, for exam
 ple\, fr
 om the Lagrangian of an optimization problem. In the 1950s Arrow and Hurwic
 z proposed a first order gradient ascent/descent method to solve this prob
 lem\, and proved convergence to a saddle point in the case of strict conca
 vity. In the past two decades this simple method has been rediscovered in 
 the context of network optimization as it gives rise to localised update r
 ules. However\, in many of these problems the function is not strictly con
 cave and oscillatory behaviour has been observed in practice. While higher
  order methods are available\, these are more complicated to implement\, a
 nd\, in the case of network optimization problems\, can prevent distribute
 d computation by requiring additional information transfer between nodes.\
 n\nIn this talk I shall discuss my recent work on classifying the asymptot
 ic behaviour of the gradient method on an arbitrary concave-convex functio
 n. Despite the non-linearity of the dynamics\, it is shown that soluti
 ons of the gradient method converge to solutions of an explicit linear ODE
  specified by only local information at the saddle point. Exact characteri
 sation is given for special cases of optimization problems\, and modificat
 ion methods are presented. The proof uses simple geometric arguments which
  will be illustrated with pictures.
LOCATION:MR14\, Centre for Mathematical Sciences
END:VEVENT
END:VCALENDAR
