BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Machine Learning and Dynamical Systems Meet in Reproducing Kernel 
 Hilbert Spaces with Insights from Algorithmic Information Theory - Boumedi
 ene Hamzi
DTSTART:20240425T140000Z
DTEND:20240425T150000Z
UID:TALK215533@talks.cam.ac.uk
CONTACT:Matthew Colbrook
DESCRIPTION:Since its inception in the 19th century\, through the efforts 
 of Poincaré and Lyapunov\, the theory of dynamical systems has addressed 
 the qualitative behavior of systems as understood from models. From this p
 erspective\, modeling dynamical processes in applications demands a detail
 ed understanding of the processes to be analyzed. This understanding leads
  to a model\, which approximates observed reality and is often expressed b
 y a system of ordinary/partial\, underdetermined (control)\, deterministic
 /stochastic differential or difference equations. While these models are v
 ery precise for many processes\, for some of the most challenging applicat
 ions of dynamical systems\, such as climate dynamics\, brain dynamics\, bi
 ological systems\, or financial markets\, developing such models is notabl
 y difficult. On the other hand\, the field of machine learning is concerne
 d with algorithms designed to accomplish specific tasks\, whose performanc
 e improves with more data input. Applications of machine learning methods 
 include computer vision\, stock market analysis\, speech recognition\, rec
 ommender systems\, and sentiment analysis in social media. The machine lea
 rning approach is invaluable in settings where no explicit model is formul
 ated\, but measurement data are available. This is often the case in many 
 systems of interest\, and the development of data-driven technologies is i
 ncreasingly important in many applications. The intersection of the fields
  of dynamical systems and machine learning is largely unexplored\, and the
  objective of this talk is to show that working in reproducing kernel Hilb
 ert spaces offers tools for a data-based theory of nonlinear dynamical sys
 tems.\n\nIn the first part of the talk\, we introduce simple methods to le
 arn surrogate models for complex systems. We present variants of the metho
 d of Kernel Flows as simple approaches for learning the kernels that appea
 r in the emulators we use in our work. First\, we will discuss the method o
 f parametric and nonparametric kernel flows for learning chaotic dynamical
  systems. We will also explore learning dynamical systems from irregularly
  sampled time series and from partial observations. We will introduce the 
 methods of Sparse Kernel Flows and Hausdorff-metric based Kernel Flows (HM
 KFs) and apply them to learn 132 chaotic dynamical systems. We draw parall
 els between Minimum Description Length (MDL) and Regularization in Machine
  Learning (RML)\, showcasing that the method of Sparse Kernel Flows offers
  a natural approach to kernel learning. By considering code lengths and co
 mplexities rooted in Algorithmic Information Theory (AIT)\, we demonstrate
  that data-adaptive kernel learning can be achieved through the MDL princi
 ple\, bypassing the need for cross-validation as a statistical method. Fin
 ally\, we extend the method of Kernel Mode Decomposition to design kernels
  in view of detecting critical transitions in some fast-slow random dynami
 cal systems.\n\nThen\, we introduce a data-based approach to estimating ke
 y quantities which arise in the study of nonlinear autonomous\, control\, 
 and random dynamical systems. Our approach hinges on the observation that 
 much of the existing linear theory may be readily extended to nonlinear sy
 stems\, with a reasonable expectation of success\, once the nonlinear syst
 em has been mapped into a high- or infinite-dimensional Reproducing Kerne
 l Hilbert Space. We develop computable\, non-parametric estimators approxi
 mating controllability and observability energies for nonlinear systems. W
 e apply this approach to the problem of model reduction of nonlinear contr
 ol systems. It is also shown that the controllability energy estimator pro
 vides a key means for approximating the invariant measure of an ergodic\, 
 stochastically forced nonlinear system. Finally\, we show how kernel metho
 ds can be used to approximate center manifolds\, propose a data-based vers
 ion of the center manifold theorem\, and construct Lyapunov functions for 
 nonlinear ODEs.\n
LOCATION:Centre for Mathematical Sciences\, MR2
END:VEVENT
END:VCALENDAR
