Herding or a '3rd way to learn'
- Speaker: Simon Lacoste-Julien and Ferenc Huszar
- Date & Time: Thursday 21 October 2010, 14:00 - 15:30
- Venue: Engineering Department, CBL Room 438
Abstract
In this RCC, we will cover a recently proposed (2009) learning approach by Max Welling, peculiarly called “Herding”. Even though this is brand-new research, it has interesting links with standard machine learning concepts that we will review. We will split the RCC into two parts, tracing the evolution typical in research from the original idea (in the first paper) to a more modern viewpoint (in the latest paper).
“Herding” is a novel approach to learning which seems a bit peculiar at first sight, but it has interesting properties and good empirical performance. It can be seen as a ‘3rd way to learn’: the first being traditional frequentist point estimates of parameters, and the second Bayesian posteriors over parameters. Herding lies somewhere in between: rather than maintaining samples over parameters as in Bayesian learning, it navigates the parameter space in a deterministic (but almost chaotic) way that does not converge to any particular point estimate. During this exploration, it produces pseudo-samples which can be used in much the same way as samples from an MCMC method after learning. The two main advantages of herding are that:
- it is computationally cheaper than traditional learning of Markov random fields, combining learning and inference in one deterministic step;
- the resulting pseudo-samples exhibit strong anticorrelations, thereby providing a more uniform coverage of the space; this means that the number of (pseudo-)samples needed to approximate relevant integrals is substantially smaller than with random sampling (namely, O(1/T) convergence vs. O(1/sqrt(T)) for i.i.d. sampling).
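To give a feel for the dynamics described above, here is a minimal toy sketch of the herding update, x_t = argmax_x ⟨w, φ(x)⟩ followed by w ← w + μ − φ(x_t). The discrete state grid, the polynomial features and the target moments are all made up for illustration and do not come from the papers:

```python
import numpy as np

# Hypothetical toy setup: a small discrete state space, features phi(x) = (x, x^2),
# and target moments mu (mean 0, second moment 1).
states = np.linspace(-3, 3, 61)          # candidate states x
phi = np.stack([states, states ** 2])    # feature matrix, shape (2, 61)
mu = np.array([0.0, 1.0])                # target moments E[phi(x)]

w = mu.copy()                            # initialise weights at the moments
samples = []
for t in range(2000):
    # greedy step: pick the state maximising <w, phi(x)>
    i = int(np.argmax(w @ phi))
    samples.append(states[i])
    # deterministic weight update: w <- w + mu - phi(x_t)
    w = w + mu - phi[:, i]

# empirical moments of the pseudo-samples should approach mu at an O(1/T) rate,
# since the moment error equals (w_0 - w_T)/T and w stays bounded
emp = np.array([np.mean(samples), np.mean(np.square(samples))])
print(emp)
```

Note there is no randomness anywhere: the anticorrelated, space-filling behaviour of the pseudo-samples comes entirely from the deterministic weight dynamics.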
- Herding Dynamical Weights to Learn, Max Welling, ICML 2009
which introduced herding as an algorithm for learning/sampling in the context of Markov random fields (and exhibits property 1 mentioned above). The first part therefore concentrates on links with previous approaches to learning in Markov random fields.
[Note: if you are interested enough in the topic to read this paper very carefully, be aware that a correction to the proof of recurrence is available.]
- Super-Samples from Kernel Herding, Yutian Chen, Max Welling, Alex Smola, UAI 2010
which introduces kernel herding and presents a more modern viewpoint on herding in the context of approximating distributions (and where property 2 mentioned above is exploited). Herding here can be understood as a greedy optimisation for approximating a probability distribution with the empirical distribution of pseudo-samples, in such a way that certain nonlinear moments of the original distribution are preserved. Connections to message passing, Monte Carlo, and other relevant methods will be discussed.
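The greedy-optimisation view can be sketched as follows: at each step, pick the point where the target's kernel mean embedding most exceeds the running average of kernel evaluations at the points chosen so far. This is a hypothetical minimal example; the RBF kernel bandwidth, the discretised Gaussian target, and the candidate grid are all invented for illustration:

```python
import numpy as np

def rbf(a, b, gamma=2.0):
    # Gaussian (RBF) kernel matrix between two 1-D point sets
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

candidates = np.linspace(-3, 3, 121)       # candidate pseudo-sample locations
p = np.exp(-0.5 * candidates ** 2)
p /= p.sum()                               # discretised standard normal target

K = rbf(candidates, candidates)            # kernel matrix over candidates
mu_p = K @ p                               # kernel mean embedding E_{x'~p} k(., x')

chosen = []
running = np.zeros_like(candidates)        # sum_s k(., x_s) over chosen samples
for t in range(50):
    # greedy step: maximise the embedding gap between the target and
    # the empirical distribution of the samples selected so far
    scores = mu_p - running / (t + 1)
    i = int(np.argmax(scores))
    chosen.append(candidates[i])
    running += K[:, i]
```

The resulting “super-samples” repel each other through the subtracted running average, which is what yields the more uniform coverage (and faster moment convergence) than i.i.d. draws from p.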
Series: This talk is part of the Machine Learning Reading Group @ CUED series.