BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Investigating the interdependencies between DNN architectures and 
 optimisation methods for Large Vocabulary Continuous Speech Recognition 
 - Adnan Haider\, University of Cambridge
DTSTART:20161031T120000Z
DTEND:20161031T130000Z
UID:TALK68740@talks.cam.ac.uk
CONTACT:Anton Ragni
DESCRIPTION:The problem of Large Vocabulary Continuous Speech Recognition 
 (LVCSR) can be cast as a general supervised learning problem: given some 
 seen examples\, the task is to learn the relationship between the input 
 space and the output space from the data. Ideally\, we wish to choose a 
 prediction function that avoids rote memorisation and instead generalises 
 the concepts that can be learned from a given set of utterances. This 
 involves selecting\, from an adequately chosen family of prediction 
 functions\, the function that minimises a risk measure. In practice\, 
 rather than consider a variational optimisation problem over a generic 
 family of prediction functions\, we assume that the prediction function 
 has a fixed form\, which in the context of LVCSR corresponds to hybrid 
 HMM-Deep Neural Network models of different network topologies. The 
 focus of this work is to investigate an effective coupling between 
 various optimisation methods and network topologies for effective 
 training on large amounts of speech data. In particular\, we will 
 investigate optimisation methods that try to combine the best properties 
 of batch and stochastic algorithms while carefully considering the 
 computational time and the number of updates.
LOCATION:Department of Engineering - LR3A
END:VEVENT
END:VCALENDAR
