BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Sparsity: Beyond L1 - Amar Shah (University of Cambridge)
DTSTART:20130418T140000Z
DTEND:20130418T153000Z
UID:TALK43637@talks.cam.ac.uk
CONTACT:Colorado Reed
DESCRIPTION:The L1 norm is a popular choice of regularizer when learning a
  mapping from inputs to outputs under a frequentist framework. This norm
  has the useful properties of being convex and promoting sparsity. Sparse
  solutions are desirable when you believe that only a subset of the input
  features is required to generate an output\, e.g. when the input
  dimension is much larger than the number of data points. A significant
  body of research is dedicated to studying when such regularizers work
  and where they do not. These ideas can be extended to group or
  structured sparsity\, which can be applied to sparse multi-kernel
  learning and non-linear variable selection. Finally\, we shall discuss
  sparse methods developed by generalizing the L1 norm in particular
  ways\, including sparse PCA and dictionary learning.
LOCATION:Engineering Department\, CBL Room 438
END:VEVENT
END:VCALENDAR
