BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Redundancy in Deep Neural Networks and Its Impacts to Hardware Acc
 elerator Design - Jiang Su\, Imperial College London
DTSTART:20170724T100000Z
DTEND:20170724T110000Z
UID:TALK73731@talks.cam.ac.uk
CONTACT:Robert Mullins
DESCRIPTION:Hardware systems for neural networks are both compute- and me
 mory-intensive\, which limits their applicability in power-constrained en
 vironments. As a result\, model-level redundancy approaches such as dropo
 ut\, pruning\, and parameter compression have been proposed to increase c
 lassification accuracy and/or lower hardware complexity. Additionally\, n
 etworks that exploit the significant data-level redundancy in weight para
 meters have consistently been shown to achieve classification accuracy co
 mparable to that of their floating-point equivalents. Consequently\, ther
 e has recently been growing interest in networks with low-precision weigh
 t representations\, especially those using only 1 or 2 bits. Such computa
 tional structures significantly reduce compute\, spatial complexity\, and
  memory footprint\, ultimately improving their applicability to power-con
 strained application scenarios.\n\nIn this talk\, these two levels of red
 undancy are introduced\, along with their impact on hardware system desig
 n. Finally\, some personal opinions on the design of efficient deep neura
 l network acceleration systems are offered for open discussion.
LOCATION:SW01\, Computer Laboratory
END:VEVENT
END:VCALENDAR
