Regularized linear autoencoders, the Morse theory of loss, and backprop in the brain
- Speaker: Jon Bloom (Broad Institute of MIT and Harvard)
- Date & Time: Monday 24 June 2019, 14:00 - 15:00
- Venue: MR12
Abstract
When trained to minimize the distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. We prove that L2-regularized LAEs are symmetric at all critical points and learn the principal directions as the left singular vectors of the decoder. We smoothly parameterize the critical manifold and relate the minima to the MAP estimate of probabilistic PCA. Finally, we consider implications for PCA algorithms, computational neuroscience, and the algebraic topology of deep learning.
ICML 2019.
Series
This talk is part of the Statistics series.
Included in Lists
- All CMS events
- All Talks (aka the CURE list)
- bld31
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- CMS Events
- custom
- DPMMS info aggregator
- DPMMS lists
- DPMMS Lists
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- Machine Learning
- MR12
- rp587
- School of Physical Sciences
- Statistical Laboratory info aggregator
- Statistics
- Statistics Group
Note: Ex-directory lists are not shown.