BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:ν-Tangent Kernels - Akshunna S. Dogra (Imperial College)
DTSTART:20240314T150000Z
DTEND:20240314T160000Z
UID:TALK210139@talks.cam.ac.uk
CONTACT:Nicolas Boulle
DESCRIPTION:Machine learning (ML) has been profitably leveraged across a
  wide variety of problems in recent years. Empirical observations show
  that ML models from suitable functional spaces are capable of adequately
  efficient learning across many disciplines. In this work (the first in
  a planned sequence of three)\, we build the foundations for a generic
  perspective on ML model optimization and generalization dynamics.
  Specifically\, we prove that under variants of gradient descent\,
  “well-initialized” models solve sufficiently well-posed problems at
  \\textit{a priori} or \\textit{in situ} determinable rates. Notably\,
  these results hold for a wider class of problems\, loss functions\,
  and models than the standard mean squared error and large-width regime
  that is the focus of conventional Neural Tangent Kernel (NTK) analysis.
  The $\\nu$-Tangent Kernel ($\\nu$TK)\, a functional analytic object
  reminiscent of the NTK\, emerges naturally as a key object in our
  analysis\, and its properties control learning.\nWe exemplify the power
  of the proposed perspective by showing that it applies to diverse
  practical problems solved using real ML models\, such as classification
  tasks\, data/regression fitting\, differential equations\, and shape
  observable analysis. We end with a brief discussion of the numerical
  evidence and the role $\\nu$TKs may play in characterizing the search
  phase of optimization\, which leads to the “well-initialized” models
  that are the crux of this work.
LOCATION:Centre for Mathematical Sciences\, MR14
END:VEVENT
END:VCALENDAR
