BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Causal Machine Learning - Wenlin Chen\, Julien Horwood & Juyeon H
 eo (University of Cambridge)
DTSTART:20230322T110000Z
DTEND:20230322T123000Z
UID:TALK198772@talks.cam.ac.uk
CONTACT:James Allingham
DESCRIPTION:In the first portion of the talk\, we introduce some fundament
 al notions in causal inference that serve as a foundation for causal machi
 ne learning. We discuss the relationships between Markov properties\, fait
 hfulness\, and the correspondence between conditional independencies in ca
 usal graphs and observational data. We then illustrate how these principle
 s have direct applications in machine learning via half-sibling regression
  and invariant causal prediction. \n\nIn the second portion of the talk\, 
 we discuss causal structure learning\, which aims to find causal relations
  between variables from observational data or a mixture of observational a
 nd experimental data for robustness and generalizability. We introduce thr
 ee categories of causal structure learning and dive deep into ‘neural’
  causal structure learning using gradient-based optimization for scalable 
 causal discovery. \n\nIn real-world problems with structured data\, the sy
 mbolic variables connected in a causal graph are not provided a priori. I
 n the third portion of the talk\, we introduce causal representation lear
 ning\, which aims to learn the symbols required for causal inference and di
 scovery from structured data\, paralleling machine learning's move beyon
 d symbolic AI. We discuss why unsupervised causal representation learnin
 g is cha
 llenging and present a recently proposed causal representation learning me
 thod based on identifiable deep generative models.\n\nReferences (recommen
 ded reading\, not required):\n\nPeters\, J.\, Bühlmann\, P.\, & Meinshau
 sen\, N. (2015). Causal inference using invariant prediction: Identifica
 tion and confidence intervals. arXiv. https://doi.org/10.48550/arXiv.150
 1.01332.\n\nZheng\, X.\, et al. (2018). DAGs with NO TEARS: Continuous o
 ptimization for structure learning. Advances in Neural Information Proce
 ssing Systems\, 31.\n\nVowels\, M. J.\, Camgoz\, N. C.\, & Bowden\, R. (
 2022). D’ya like DAGs? A survey on structure learning and causal discov
 ery. ACM Computing Surveys\, 55(4)\, 1-36.\n\nSchölkopf\, B.\, Locatell
 o\, F.\, Bauer\, S.\, Ke\, N. R.\, Kalchbrenner\, N.\, Goyal\, A.\, & B
 engio\, Y. (2021). Toward causal representation learning. Proceedings o
 f the IEEE\, 109(5)\, 612-634.\n\nLocatello\, F.\, Bauer\, S.\, Luci
 c\, M.\, Raetsch\, G.\, Gelly\, S.\, Schölkopf\, B.\, & Bachem\, O. (20
 19). Challenging common assumptions in the unsupervised learning of dise
 ntangled representations. In International Conference on Machine Learnin
 g (pp. 4114-4124). PMLR.\n\nLu\, C.\, Wu\, Y.\, Hernández-Lobato\, J. M
 .\, & Schölkopf\, B. (2021). Invariant causal representation learning f
 or out-of-distribution generalization. In International Conference on Le
 arning Representations.
LOCATION:Cambridge University Engineering Department\, CBL Seminar room BE
 4-38.
END:VEVENT
END:VCALENDAR
