BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Improving Model Robustness for Natural Language Inference - Joe St
 acey (Imperial College London)
DTSTART:20230428T110000Z
DTEND:20230428T120000Z
UID:TALK200017@talks.cam.ac.uk
CONTACT:Michael Schlichtkrull
DESCRIPTION:Abstract: \n\nNatural Language Inference (NLI) models are know
 n to learn from biases within their training data\, impacting how well the
  models generalise to other unseen datasets. Most methods to improve model
  robustness focus on preventing models learning from these biases\, which 
 can result in restrictive models and lower performance. We explore a range
  of alternative techniques to improve model robustness\, including trainin
 g models with human explanations\, introducing a new logical reasoning fra
 mework\, and generating domain-targeted data using GPT3. We measure robust
 ness by training models on SNLI and testing performance on MNLI\, a challe
 nging robustness setting where most prior work shows limited improvements.
 \n\nBio: \n\nJoe is a 3rd year PhD student at Imperial College London supe
 rvised by Marek Rei. His research focuses on creating more robust NLP mode
 ls that generalise better to unseen\, out-of-distribution datasets. Joe is
  a recipient of the 2023 Apple Scholars in AI/ML PhD fellowship. 
LOCATION:Computer Lab\, FW26
END:VEVENT
END:VCALENDAR
