BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Diffusion Language Models - Julianna Piskorz (University of Cambri
 dge)
DTSTART:20251029T110000Z
DTEND:20251029T123000Z
UID:TALK240115@talks.cam.ac.uk
CONTACT:Xianda Sun
DESCRIPTION:Diffusion Language Models (DLMs) have recently emerged as a pr
 omising alternative to the dominant autoregressive paradigm for text gener
 ation. Unlike GPT-style models that generate text sequentially\, DLMs empl
 oy an iterative denoising process that starts from a noisy representation
  and progressively reconstructs coherent text\, enabling parallel token
  generation and bidirectional context modelling. While early research
  explored both continuous and discrete diffusion formulations for text\,
  the discrete masked diffusion objective has recently come to dominate the
  landscape\, allowing masked diffusion language models (MDLMs) to scale
  effectively to larger model sizes and achieve competitive perplexity on
  standard benchmarks\, as evidenced by models such as LLaDA\, Mercury\,
  Gemini Diffusion\, and Seed Diffusion. MDLMs have attracted considerable
  attention for their potential to speed up inference through parallel
  decoding and to improve controllability. In this talk\, we will explore
  the formulation of DLMs\, discuss their potential advantages and
  shortcomings\, and examine the most recent and influential work in this
  new area.
LOCATION:Cambridge University Engineering Department\, CBL Seminar room BE
 4-38.
END:VEVENT
END:VCALENDAR
