BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Discrete Causal Representation Learning - Yuqi Gu (Columbia Univer
 sity)
DTSTART:20260305T140000Z
DTEND:20260305T144500Z
UID:TALK244417@talks.cam.ac.uk
DESCRIPTION:Causal representation learning seeks to uncover causal relatio
 nships among high-level latent variables from low-level\, entangled\, and 
 noisy observations. Existing approaches often either rely on deep neural n
 etworks\, which lack interpretability and formal guarantees\, or impose re
 strictive assumptions like linearity\, continuous-only observations\, and 
 strong structural priors. These limitations particularly challenge applica
 tions with a large number of discrete latent variables and mixed-type obse
 rvations. To address these challenges\, we propose discrete causal represe
 ntation learning\, a generative framework that models a directed acyclic g
 raph among discrete latent variables\, along with a sparse bipartite graph
  linking latent and observed layers. This design accommodates continuous\,
  count\, and binary responses through flexible measurement models while ma
 intaining interpretability. Under mild conditions\, we prove that both the
  bipartite measurement graph and the latent causal graph are identifiable.
  We f
 urther propose a three-stage estimate-resample-discovery pipeline: penaliz
 ed estimation of the generative model parameters\, resampling of latent co
 nfigurations from the fitted model\, and score-based causal discovery on t
 he resampled latents. We establish the consistency of this procedure\, ens
 uring reliable latent causal structure recovery. Empirical studies on educ
 ational assessment and synthetic image data demonstrate that discrete caus
 al representation learning recovers sparse and interpretable latent causal
  structures.
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
