BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Towards explainable fact checking - Isabelle Augenstein (Universit
 y of Copenhagen)
DTSTART:20200619T110000Z
DTEND:20200619T120000Z
UID:TALK142264@talks.cam.ac.uk
CONTACT:Guy Aglionby
DESCRIPTION:Automatic fact checking is one of the more involved NLP tasks 
 currently researched: not only does it require sentence understanding\, bu
 t also an understanding of how claims relate to evidence documents and wor
 ld knowledge. Moreover\, there is still no common understanding in the aut
 omatic fact checking community of how the subtasks of fact checking — cl
 aim check-worthiness detection\, evidence retrieval\, veracity prediction 
 — should be framed. This is partly owing to the complexity of the task\,
  despite efforts to formalise the task of fact checking through the develo
 pment of benchmark datasets.\nThe first part of the talk will be on automa
 tically generating textual explanations for fact checking\, thereby exposi
 ng some of the reasoning processes these models follow. The second part of
  the talk will be on re-examining how claim check-worthiness is defined\, 
 and how check-worthy claims can be detected\; followed by how to automatic
 ally generate claims which are hard to fact-check automatically.\n\n\nBio:
 \n\nIsabelle Augenstein is an associate professor in Natural Language Proc
 essing and Machine Learning at the University of Copenhagen\, Department o
 f Computer Science\, where she heads the Copenhagen NLU research group. He
 r main research interests are weakly supervised and low-resource learning 
 ith applications including fact checking\, question answering and cross-li
 ingual learning.\n
LOCATION:https://meet.google.com/xkv-cako-arr
END:VEVENT
END:VCALENDAR
