Feature Learning in Two-layer Neural Networks: The Effect of Data Covariance
- Speaker: Murat A. Erdogdu (University of Toronto) — Website
- Date & Time: Wednesday 24 April 2024, 14:00–15:00
- Venue: Centre for Mathematical Sciences, MR14
Abstract
We study the effect of gradient-based optimization on feature learning in two-layer neural networks. We consider a setting where the number of samples is of the same order as the input dimension and show that, when the input data is isotropic, gradient descent always improves upon the initial random features model in terms of prediction risk, for a certain class of targets. Further leveraging the practical observation that data often contains additional structure, i.e., the input covariance has non-trivial alignment with the target, we prove that the class of learnable targets can be significantly extended, demonstrating a clear separation between kernel methods and two-layer neural networks in this regime.
Series: This talk is part of the Applied and Computational Analysis series.
Included in Lists
- All CMS events
- All Talks (aka the CURE list)
- Applied and Computational Analysis
- bld31
- Centre for Mathematical Sciences, MR14
- CMS Events
- DAMTP info aggregator
- Featured lists