Some topics at the intersection of control, dynamics, and learning.
- 👤 Speaker: Eduardo Sontag, Northeastern University
- 📅 Date & Time: Thursday 01 May 2025, 14:00 - 15:00
- 📍 Venue: JDB Seminar Room, Department of Engineering and online (Zoom)
Abstract
Data-driven modeling typically involves simplifying systems through dimensionality reduction (fewer variables) or through dimensionality enlargement (more variables, but simpler, perhaps linear, dynamics). Autoencoders with narrow bottleneck layers are a typical approach to the former (allowing the discovery of dynamics taking place on a lower-dimensional manifold), while autoencoders with wide layers provide an approach to the latter, with “neurons” in these layers thought of as “observables” in Koopman representations. In the first part of this talk, I’ll briefly discuss some theoretical results about each of these topics. (Joint work with M.D. Kvalheim on dimension reduction and with Z. Liu and N. Ozay on Koopman representations.)
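A toy numerical sketch of the two directions contrasted above (the data, dimensions, and dynamics below are invented for illustration and are not from the talk): a narrow linear bottleneck recovers data confined to a low-dimensional subspace, while lifting a nonlinear system with one extra observable yields exactly linear, Koopman-style dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Narrow bottleneck: 200 points in R^5 that secretly live on a
# 2-dimensional linear subspace, so a width-2 linear encoder/decoder
# (equivalently, PCA) reconstructs them essentially exactly.
Z = rng.normal(size=(200, 2))          # latent coordinates
X = Z @ rng.normal(size=(2, 5))        # embedded in R^5
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
encode = Vt[:2]                        # bottleneck of width 2
X_hat = Xc @ encode.T @ encode + X.mean(axis=0)
recon_err = np.max(np.abs(X - X_hat))  # near machine precision

# --- Wide lifting: the classic quadratic system
#   x1+ = a*x1,   x2+ = b*x2 + (a**2 - b)*x1**2
# becomes linear in the observables y = (x1, x2, x1**2).
a, b = 0.9, 0.5
K = np.array([[a,   0.0, 0.0],
              [0.0, b,   a**2 - b],
              [0.0, 0.0, a**2]])       # linear dynamics on the lifted state
x = np.array([1.0, -0.5])
y = np.array([x[0], x[1], x[0]**2])
for _ in range(10):
    x = np.array([a * x[0], b * x[1] + (a**2 - b) * x[0]**2])
    y = K @ y                          # lifted prediction stays exact
lift_err = np.max(np.abs(y - [x[0], x[1], x[0]**2]))
```

Here the lifted coordinate `x1**2` plays the role of one of the “observables” mentioned above: in the enlarged state the nonlinear dynamics are represented without approximation by the matrix `K`.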
The training of autoencoders, and more generally the solution of other optimization problems, including policy optimization in reinforcement learning, typically relies upon some variant of gradient descent. There has been much recent work in the machine learning, control, and optimization communities on applying the Polyak-Łojasiewicz Inequality (PŁI) to such problems in order to establish exponential (a.k.a. “linear” in the local-iteration language of numerical analysis) convergence of loss functions to their minima under the gradient flow. A somewhat surprising fact is that the exponential rate, at least in the continuous-time LQR problem, vanishes for large initial conditions, resulting in mixed globally linear / locally exponential behavior. This is in sharp contrast with the discrete-time LQR problem, where there is global exponential convergence. The gap between the CT and DT behaviors motivated our work on generalizations of the PŁI condition, and the second part of the talk will address that topic. In fact, these generalizations are key to understanding the effect of errors in the estimation of the gradient. Such errors might arise from adversarial attacks, incorrect evaluation by an oracle, early stopping of a simulation, inaccurate and very approximate digital twins, stochastic computations (algorithm “reproducibility”), or learning by sampling from limited data. We will suggest an input-to-state stability (ISS) analysis of this issue. Time permitting, we will also mention some initial results on the performance of linear feedforward networks in feedback control. (Joint work with A.C.B. de Oliveira, L. Cui, Z.P. Jiang, and M. Siami.)
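A minimal sketch of the convergence phenomena described above (the quadratic loss, step size, and noise level are invented for illustration, not taken from the talk): a loss satisfying the PŁ inequality ½‖∇f(x)‖² ≥ μ(f(x) − f*) decays geometrically under gradient descent, while a bounded error in the gradient degrades convergence to an ISS-style neighborhood of the minimum rather than the minimum itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical strongly convex quadratic f(x) = 0.5 * x^T A x (so f* = 0),
# which satisfies the PL inequality with mu = lambda_min(A).
Q = rng.normal(size=(4, 4))
A = Q @ Q.T + 0.5 * np.eye(4)          # symmetric positive definite
mu, L = np.linalg.eigvalsh(A)[[0, -1]] # smallest / largest eigenvalue
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x0 = rng.normal(size=4)

# PL inequality checked at the initial point (f* = 0 here):
pl_gap = 0.5 * grad(x0) @ grad(x0) - mu * f(x0)   # nonnegative

# Exact gradient: geometric ("exponential") decay of the loss.
x = x0.copy()
for _ in range(1000):
    x -= (1.0 / L) * grad(x)
clean_loss = f(x)                      # driven essentially to zero

# Perturbed gradient: with a bounded estimation error, the loss settles
# near (not at) the minimum -- the ISS-style picture for gradient errors.
x = x0.copy()
for _ in range(1000):
    x -= (1.0 / L) * (grad(x) + 0.01 * rng.normal(size=4))
noisy_loss = f(x)                      # small but nonzero residual
```

The residual loss in the perturbed run scales with the size of the gradient error, which is exactly the kind of input-to-state estimate the abstract refers to; the clean run, by contrast, exhibits the geometric decay that the PŁ inequality guarantees.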
The seminar will be held in the JDB Seminar Room, Department of Engineering, and online (Zoom): https://newnham.zoom.us/j/92544958528?pwd=YS9PcGRnbXBOcStBdStNb3E0SHN1UT09
Series
This talk is part of the CUED Control Group Seminars series.