An Instability in Variational Methods for Learning Topic Models
- đ¤ Speaker: Andrea Montanari (Stanford University)
- đ Date & Time: Tuesday 16 January 2018, 09:00 - 09:45
- đ Venue: Seminar Room 1, Newton Institute
Abstract
Topic models are extremely useful for extracting latent degrees of freedom from large unlabeled datasets. Variational Bayes algorithms are the approach most commonly used by practitioners to learn topic models. Their appeal lies in the promise of reducing the problem of Bayesian inference to an optimization problem. I will show that, even within an idealized Bayesian scenario, variational methods display an instability that can lead to misleading results. [Based on joint work with Behrooz Ghorbani and Hamid Javadi]
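To make the "inference as optimization" idea concrete, here is a minimal sketch (not the speaker's method, and not the instability the talk analyzes) of the standard mean-field variational update for a single document's topic proportions in LDA. The function name, toy topics, and toy document are illustrative assumptions:

```python
import numpy as np
from scipy.special import digamma

def variational_e_step(word_ids, topics, alpha=0.1, n_iter=50):
    """Mean-field coordinate-ascent updates for one document's topic
    posterior (the classic variational E-step of LDA).

    word_ids : token indices of the document into the vocabulary
    topics   : K x V matrix of topic-word probabilities (rows sum to 1)
    """
    K = topics.shape[0]
    # Dirichlet variational parameters for the topic proportions theta
    gamma = np.full(K, alpha + len(word_ids) / K)
    for _ in range(n_iter):
        # phi[n, k] proportional to topics[k, w_n] * exp(E_q[log theta_k])
        log_phi = np.log(topics[:, word_ids].T + 1e-12) + digamma(gamma)
        phi = np.exp(log_phi - log_phi.max(axis=1, keepdims=True))
        phi /= phi.sum(axis=1, keepdims=True)   # normalize responsibilities
        gamma = alpha + phi.sum(axis=0)         # update Dirichlet parameters
    return gamma, phi

# Toy example: two topics over a 4-word vocabulary,
# and a document drawn mostly from topic 0's vocabulary.
topics = np.array([[0.45, 0.45, 0.05, 0.05],
                   [0.05, 0.05, 0.45, 0.45]])
doc = [0, 1, 0, 1, 2]
gamma, phi = variational_e_step(doc, topics)
```

Each update is a closed-form coordinate-ascent step on the evidence lower bound (ELBO), which is exactly what turns posterior inference into optimization; the talk's point is that this optimization can be unstable even in idealized settings.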
Series
This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 1, Newton Institute