Variational Bayes as Surrogate Regression
- Speaker: Will Tebbutt (University of Cambridge)
- Date & Time: Wednesday 10 February 2021, 11:00–12:30
- Venue: https://eng-cam.zoom.us/j/86068703738?pwd=YnFleXFQOE1qR1h6Vmtwbno0LzFHdz09
Abstract
Variational Bayes is a useful approximate inference framework in which an intractable posterior distribution is approximated by a simpler, tractable one. The extent to which this is useful (usually) depends on how closely the approximation matches the true posterior, and how quickly it can be obtained. We’ll present lines of work that utilise the posteriors of tractable models as this approximation, and the interesting inference algorithms that arise in this setting.
Although we’ll cover all of these in the presentation, it will be helpful to have some familiarity with the basics of variational Bayes (e.g. what the ELBO is), variational autoencoders and the idea of amortised inference, exponential families, and Gaussian processes. A basic understanding of natural gradients would also be helpful, but is not essential.
If you have the time, please read this: Opper, Manfred, and Cédric Archambeau. “The variational Gaussian approximation revisited.” Neural computation 21.3 (2009): 786-792.
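As a warm-up for the variational-Gaussian idea, here is a minimal sketch of fitting a Gaussian approximation q(z) = N(m, s²) by gradient ascent on the ELBO. The model, numbers, and variable names are all hypothetical illustration, not taken from the talk; a conjugate Gaussian-Gaussian model is used so the result can be checked against the exact posterior.

```python
import math

# Hypothetical toy conjugate model:
#   prior      z ~ N(0, 1)
#   likelihood y ~ N(z, sigma2)
# The exact posterior is Gaussian, so we can verify the variational fit.
y, sigma2 = 2.0, 0.5

def elbo(m, s):
    """ELBO for q(z) = N(m, s^2): E_q[log p(y|z)] + E_q[log p(z)] + H[q]."""
    exp_loglik = (-0.5 * math.log(2 * math.pi * sigma2)
                  - ((y - m) ** 2 + s ** 2) / (2 * sigma2))
    exp_logprior = -0.5 * math.log(2 * math.pi) - (m ** 2 + s ** 2) / 2
    entropy = 0.5 * math.log(2 * math.pi * math.e * s ** 2)
    return exp_loglik + exp_logprior + entropy

# Gradient ascent on (m, s) using the closed-form ELBO gradients.
m, s, lr = 0.0, 1.0, 0.05
for _ in range(2000):
    dm = (y - m) / sigma2 - m          # d ELBO / d m
    ds = -s / sigma2 - s + 1.0 / s     # d ELBO / d s
    m, s = m + lr * dm, s + lr * ds

# For this conjugate model the ELBO optimum is the exact posterior:
post_mean = y / (1 + sigma2)       # = 4/3 here
post_var = sigma2 / (1 + sigma2)   # = 1/3 here
print(m, s ** 2)
```

Because the model is conjugate, the fitted (m, s²) converges to the analytic posterior mean and variance; in non-conjugate models (the setting of the Opper & Archambeau paper) the same ELBO objective is optimised, but the expectations no longer have this simple closed form.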
Extra reading if you have time on your hands:
- Bui, Thang D., et al. “Partitioned variational inference: A unified framework encompassing federated and continual learning.” arXiv preprint arXiv:1811.11206 (2018).
- Ashman, Matthew, et al. “Sparse Gaussian Process Variational Autoencoders.” arXiv preprint arXiv:2010.10177 (2020).
- Khan, Mohammad Emtiyaz, and Didrik Nielsen. “Fast yet simple natural-gradient descent for variational inference in complex models.” 2018 International Symposium on Information Theory and Its Applications (ISITA). IEEE, 2018.
- Chang, Paul E., et al. “Fast variational learning in state-space Gaussian process models.” 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2020.
- Johnson, Matthew James, et al. “Composing graphical models with neural networks for structured representations and fast inference.” Proceedings of the 30th International Conference on Neural Information Processing Systems. 2016.
Series
This talk is part of the Machine Learning Reading Group @ CUED series.
Included in Lists
- All Talks (aka the CURE list)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Cambridge University Engineering Department Talks
- Centre for Smart Infrastructure & Construction
- Chris Davis' list
- Computational Continuum Mechanics Group Seminars
- custom
- Featured lists
- Guy Emerson's list
- Hanchen DaDaDash
- Inference Group Journal Clubs
- Inference Group Summary
- Information Engineering Division seminar list
- Interested Talks
- Machine Learning Reading Group
- Machine Learning Reading Group @ CUED
- Machine Learning Summary
- ML
- ndk22's list
- ob366-ai4er
- Quantum Matter Journal Club
- Required lists for MLG
- rp587
- School of Technology
- Simon Baker's List
- TQS Journal Clubs
- Trust & Technology Initiative - interesting events
- yk373's list
- yk449
Note: Ex-directory lists are not shown.