
Towards Knowledge-Robust and Multimodally-Grounded NLP


If you have a question about this talk, please contact Marinela Parovic.

In this talk, I will present our group’s work on NLP models that are knowledge-robust and multimodally-grounded. First, I will describe multi-task and reinforcement learning methods that incorporate novel auxiliary-skill tasks such as saliency, entailment, and back-translation validity, including bandit-based methods for automatically selecting and mixing auxiliary tasks and for mixing multiple rewards. Next, I will discuss building adversarial robustness to reasoning shortcuts and commonsense gaps, as well as improving cross-domain and cross-lingual generalization, in QA and dialogue models, including automatic adversary generation. Lastly, I will discuss multimodally-grounded models that condition and reason on dynamic spatio-temporal information in images and videos, as well as on action-based robotic navigation and assembly tasks, including commonsense reasoning for ambiguous robotic instructions.
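The bandit-based auxiliary-task selection mentioned above can be illustrated with a standard adversarial-bandit algorithm such as EXP3: each auxiliary task is an arm, and observed training signal (e.g., a dev-set gain) is the reward that reweights future task sampling. This is only a minimal sketch under those assumptions, not the speaker's actual method; the function names and the reward definition here are hypothetical.

```python
import math
import random

def exp3_task_selector(num_tasks, gamma=0.1):
    """EXP3 bandit over auxiliary tasks (illustrative sketch).

    select() samples a task index from the current mixing distribution;
    update(task, reward) reweights that task from a reward in [0, 1]
    (e.g., a normalized dev-set improvement after training on it).
    """
    weights = [1.0] * num_tasks

    def probabilities():
        # Mix the weight-proportional distribution with uniform
        # exploration so every task keeps a nonzero probability.
        total = sum(weights)
        return [(1 - gamma) * w / total + gamma / num_tasks for w in weights]

    def select():
        probs = probabilities()
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i
        return num_tasks - 1

    def update(task, reward):
        # Importance-weighted reward estimate (reward / probability)
        # keeps the update unbiased despite only observing one arm.
        probs = probabilities()
        weights[task] *= math.exp(gamma * reward / (probs[task] * num_tasks))

    return select, update
```

In a multi-task training loop, `select()` would pick the next auxiliary batch to train on and `update()` would feed back how much that batch helped the primary task, so helpful auxiliary tasks are sampled increasingly often while exploration never fully stops.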

This talk is part of the Language Technology Lab Seminars series.


© 2006-2025 Talks.cam, University of Cambridge.