BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Feature learning and normalization layers - Dr Matus Telgarsky\, N
 YU
DTSTART:20240313T140000Z
DTEND:20240313T150000Z
UID:TALK206803@talks.cam.ac.uk
CONTACT:Dr Varun Jog
DESCRIPTION:Abstract:\nThe first half of this talk will describe the featur
 e learning problem in deep learning optimization\, its statistical conseq
 uences\, and an approach to proving general theorems with a heavy relianc
 e on normalization layers\, which are common to all modern architecture
 s but typically treated as an analytic nuisance. Theorems will cover tw
 o settings: concrete results for shallow networks\, and abstract templat
 e theorems for general architectures.\n\nThe second half will survey pro
 of techniques. The two key ingredients are a careful new mirror descen
 t lemma\, derived from the work of Chizat and Bach\, and a new character
 ization of common layer types called lower homogeneity.\n\nJoint work wi
 th Danny Son.\n\nBio:\nMatus Telgarsky is an assistant professor at th
 e Courant Institute\, NYU\, specializing in deep learning theory. He wa
 s fortunate to receive a PhD at UCSD under Sanjoy Dasgupta. Other highli
 ghts include: co-founding\, in 2017\, the Midwest ML Symposium (MMLS) wi
 th Po-Ling Loh\; receiving a 2018 NSF CAREER award\; organizing two Simo
 ns Institute programs\, one on deep learning theory (summer 2019)\, an
 d one on generalization (fall 2024\, again with Po-Ling Loh)\; and havin
 g lots of good friends and too many fun things to do.\n
LOCATION:MR5\, CMS Pavilion A
END:VEVENT
END:VCALENDAR
