BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Do Deep Nets Really Need To Be Deep? - Rich Caruana\, Microsoft R
 esearch
DTSTART:20141003T090000Z
DTEND:20141003T100000Z
UID:TALK54938@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:Currently\, deep neural networks are the state of the art on p
 roblems such as speech recognition and computer vision. We show that\, b
 y using a method called model compression\, shallow feed-forward nets ca
 n learn the complex functions previously learned by deep nets and achiev
 e accuracies previously only achievable with deep models. Moreover\, in s
 ome cases the shallow neural nets can learn these deep functions using th
 e same number of parameters as the original deep models. On the TIMIT ph
 oneme recognition and CIFAR-10 image recognition tasks\, shallow nets ca
 n be trained that perform similarly to complex\, well-engineered\, deepe
 r convolutional architectures. Our success in training shallow neural ne
 ts to mimic deeper models suggests that there may be better algorithms f
 or training shallow nets than those currently available. I’ll also brief
 ly discuss work we’re doing to compress extremely large deep models and e
 nsembles of deep models to “modest-size” deep models that fit on servers
 \, and to “small” deep models that run on mobile devices.
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge
 \, CB1 2FB
END:VEVENT
END:VCALENDAR
