BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Spoken Language Understanding\, with and without Pre-training - Ka
 ren Livescu\, TTI-Chicago
DTSTART:20220303T150000Z
DTEND:20220303T160000Z
UID:TALK170846@talks.cam.ac.uk
CONTACT:Marinela Parovic
DESCRIPTION:Spoken language understanding (SLU) tasks involve mapping from
  speech audio signals to semantic labels. Given the complexity of such tas
 ks\, good performance might be expected to require large labeled datasets\
 , which are difficult to collect for each new task and domain.  Recent wor
 k on self-supervised speech representations has made it feasible to consid
 er learning SLU models with limited labeled data\, but it is not well unde
 rstood what pre-trained models learn and how best to apply them to downstr
 eam tasks. In this talk I will describe recent work that (1) begins to bui
 ld a better understanding of the information learned by pre-trained speech
  models and (2) explores a spoken language understanding task\, spoken nam
 ed entity recognition\, with limited labeled data.  Along the way we also 
 explore the question of how access to a speech recognizer helps (or doesn'
 t help) spoken NER\, as well as ways of improving low-resource spoken NE
 R beyond using pre-trained models.
LOCATION:https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBd
 XVpOXFvdz09
END:VEVENT
END:VCALENDAR
