BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Some topics at the intersection of control\, dynamics\, and learni
 ng. - Eduardo Sontag\, Northeastern University
DTSTART:20250501T130000Z
DTEND:20250501T140000Z
UID:TALK231442@talks.cam.ac.uk
CONTACT:Fulvio Forni
DESCRIPTION:Data-driven modeling typically involves simplifications of sys
 tems through dimensionality reduction (fewer variables) or through dimensio
 nality enlargement (more variables\, but simpler\, perhaps linear\, dynami
 cs).  Autoencoders with narrow bottleneck layers are a typical approach to
  the former (allowing the discovery of dynamics taking place in a lower-di
 mensional manifold)\, while autoencoders with wide layers provide an appro
 ach to the latter\, with "neurons" in these layers thought of as "observabl
 es" in Koopman representations. In the first part of this talk\, I'll brie
 fly discuss some theoretical results about each of these topics. (Joint wo
 rk with M.D. Kvalheim on dimension reduction and with Z. Liu and N. Ozay o
 n Koopman representations.)\n\nThe training of autoencoders\, and more gen
 erally the solution of other optimization problems\, including policy opti
 mization in reinforcement learning\, typically relies upon some variant of
  gradient descent. There has been much recent work in the machine learning
 \, control\, and optimization communities in the application of the Polyak
 -Łojasiewicz Inequality (PŁI) to such problems in order to establish exp
 onential (a.k.a. “linear” in the local-iteration language of numerical
  analysis) convergence of loss functions to their minima under the gradien
 t flow. A somewhat surprising fact is that the exponential rate\, at least
  in the continuous-time LQR problem\, vanishes for large initial condition
 s\, resulting in a mixed globally linear / locally exponential behavior. T
 his is in sharp contrast with the discrete-time LQR problem\, where there 
 is global exponential convergence. The gap between CT and DT behaviors mot
 ivated our work on generalizations of the PŁI condition\, and the second 
 part of the talk will address that topic. In fact\, these generalizations 
 are key to understanding the effect of errors in the estimation of the gra
 dient. Such errors might arise from adversarial attacks\, wrong evaluation
  by an oracle\, early stopping of a simulation\, inaccurate and very appro
 ximate digital twins\, stochastic computations (algorithm "reproducibility
 ")\, or learning by sampling from limited data. We will suggest an input-
 to-state stability (ISS) analysis of this issue. Time permitting\, we will
  also mention some initial results on the performance of linear feedforward
  networks in feedback control.  (Joint work with A.C.B. de Oliveira\, L. C
 ui\, Z.P. Jiang\, and M. Siami).\n\nThe seminar will be held in JDB Semina
 r Room\, Department of Engineering\, and online (Zoom): https://newnham.z
 oom.us/j/92544958528?pwd=YS9PcGRnbXBOcStBdStNb3E0SHN1UT09
LOCATION:JDB Seminar Room\, Department of Engineering and online (Zoom)
END:VEVENT
END:VCALENDAR
