BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Developing and evaluating data-driven models for co-speech gesture
  synthesis - Taras-Svitozar Kucherenko\, KTH Sweden
DTSTART:20211104T110000Z
DTEND:20211104T120000Z
UID:TALK165517@talks.cam.ac.uk
CONTACT:Hatice Gunes
DESCRIPTION:Humans use non-verbal behaviors to signal their intent\, emoti
 ons\, and attitudes in human-human interactions. Embodied conversational a
 gents (ECAs)\, therefore\, need this ability as well to make interaction p
 leasant and efficient. An essential part of non-verbal communication is ge
 sticulation. The task of hand-gesture generation was initially approached
  mainly with rule-based methods\, but current state-of-the-art methods are
  data-driven\, and we continue this line of research.\nIn this talk\, I wi
 ll present my work addressing the two components of my thesis: generation
  and evaluation of co-speech gestures for embodied conversational agents.
LOCATION:SS03
END:VEVENT
END:VCALENDAR
