BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Flamingo: a Visual Language Model for Few-Shot Learning - Antoine 
 Miech\, DeepMind
DTSTART:20221103T110000Z
DTEND:20221103T120000Z
UID:TALK191414@talks.cam.ac.uk
CONTACT:Panagiotis Fytas
DESCRIPTION:Building models that can be rapidly adapted to novel tasks
  using only a handful of annotated examples is an open challenge for
  multimodal machine learning research. In this talk\, I will introduce
  Flamingo\, a family of Visual Language Models (VLMs) with this ability.
  We propose key architectural innovations to: (i) bridge powerful
  pretrained vision-only and language-only models\, (ii) handle sequences
  of arbitrarily interleaved visual and textual data\, and (iii)
  seamlessly ingest images or videos as inputs. Thanks to their
  flexibility\, Flamingo models can be trained on large-scale multimodal
  web corpora containing arbitrarily interleaved text and images\, which
  is key to endowing them with in-context few-shot learning capabilities.
  We perform a thorough evaluation of our models\, exploring and
  measuring their ability to rapidly adapt to a variety of image and
  video tasks. These include open-ended tasks such as visual
  question-answering\, where the model is prompted with a question which
  it has to answer\; captioning tasks\, which evaluate the ability to
  describe a scene or an event\; and close-ended tasks such as
  multiple-choice visual question-answering. For tasks lying anywhere on
  this spectrum\, a single Flamingo model can achieve a new state of the
  art with few-shot learning\, simply by prompting the model with
  task-specific examples. On numerous benchmarks\, Flamingo outperforms
  models fine-tuned on thousands of times more task-specific data.
LOCATION:GR04\, English Faculty Building\, 9 West Road\, Sidgwick Site
END:VEVENT
END:VCALENDAR
