BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Don't multiply lightly: exploring how DNN depth interacts with HMM
  independence assumptions in hybrid HMM/DNNs used for ASR - Steven Wegman
 n\, Semantic Machines and International Computer Science Institute (ICSI)
DTSTART:20160704T110000Z
DTEND:20160704T120000Z
UID:TALK66684@talks.cam.ac.uk
CONTACT:46953
DESCRIPTION:While hybrid hidden Markov model/neural network (HMM/DNN) acou
 stic models have replaced HMM/GMMs in automatic speech recognition (ASR) d
 ue to performance improvements\, the HMM's conditional independence assump
 tions are still unrealistic. In this work we explore the extent to which t
 he depth of neural networks helps compensate for these poor conditional in
 dependence assumptions. Using a resampling framework that allows us to con
 trol the amount of data dependence in the test set - while still using rea
 l observations from the data - we can determine how robust neural networks
 \, and particularly deeper models\, are to data dependence. Our conclusion
 s are that if the data were to match the conditional independence assumpti
 ons of the HMM\, there would be little benefit from using deeper models. I
 t is only when the data become more dependent that depth improves ASR perf
 ormance. That performance degrades substantially as the data become more r
 ealistic\, however\, suggests that better temporal modeling is still neede
 d for ASR. This is joint work with Suman Ravuri.
LOCATION:James Dyson Building Seminar Room - Department of Engineering
END:VEVENT
END:VCALENDAR
