On Sparsity and Overcompleteness in Image Models
- Speaker: Pietro Berkes (and Richard Turner)
- Date & Time: Wednesday 23 April 2008, 14:00 - 15:00
- Venue: TCM Seminar Room, Cavendish Laboratory, Department of Physics
Abstract
The principles that underlie the structure of receptive fields in the primary visual cortex are not well understood. One theory is that they emerge from information-processing constraints, and that two basic principles in particular play a key role. The first principle is sparsity: both neural firing rates and visual statistics are sparsely distributed, and sparse models for images have been successful in reproducing some of the characteristics of simple-cell receptive fields (RFs) in V1 [1,2]. The second principle is overcompleteness: the number of neurons in V1 is 100 to 300 times larger than the number of neurons in the LGN. It has often been assumed that sparse, overcomplete codes might lend some computational advantage in the processing of visual information [3,4]. The goal of this work is to investigate this claim.
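The sparse coding idea referenced above can be illustrated with a toy sketch. This is not the model from the talk: it assumes a generic linear generative model x = A s + noise with an L1 (Laplace-like) sparsity penalty on the coefficients, solved by ISTA (proximal gradient descent); all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the talk's model): image patches are modelled
# as x = A s + noise, with a sparse prior on the coefficients s.
D, K = 16, 32                       # pixels per patch, basis size (2x overcomplete)
A = rng.normal(size=(D, K))
A /= np.linalg.norm(A, axis=0)      # unit-norm basis functions

def sparse_code(x, A, lam=0.1, n_iter=200):
    """Infer sparse coefficients by ISTA on 0.5*||x - A s||^2 + lam*||s||_1."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part's gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ s - x)       # gradient of the reconstruction term
        s = s - g / L
        # soft-thresholding: the proximal step for the L1 penalty
        s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)
    return s

# Generate a patch from a sparse ground-truth code and recover a sparse code for it
s_true = rng.laplace(scale=0.5, size=K) * (rng.random(K) < 0.1)
x = A @ s_true
s_hat = sparse_code(x, A)
print("active coefficients:", int(np.sum(np.abs(s_hat) > 1e-6)), "of", K)
```

In full sparse coding models the basis A is also learnt from natural images; when that is done, the learnt basis functions resemble the localised, oriented, bandpass RFs of V1 simple cells.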
Many different sparse-overcomplete models of visual processing have been proposed. These have largely been evaluated on the basis of their correspondence with neural properties (RF frequency, orientation, and aspect ratio after learning), their effectiveness in denoising natural images, or the efficiency with which they encode natural images. However, the questions of how sparse the code should be, what form the sparsity should take, and how overcomplete the code should be have only rarely been addressed.
Here we formalise such questions of optimality in the context of Bayesian model selection, treating both the degree of sparsity and the extent of overcompleteness as parameters of a probabilistic model that must be learnt from natural image data. In the Bayesian framework, models are compared on their marginal likelihoods, a measure which reflects their ability to fit the data but also incorporates a Bayesian equivalent of Occam's razor by automatically penalising models with more parameters than the data support. We compare different sparse coding models and show that the optimal model is indeed very sparse but, perhaps surprisingly, only modestly overcomplete. Thus, according to our results, linear sparse coding models are not sufficient to explain the presence of an overcomplete code in the primary visual cortex.
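The kind of comparison the abstract describes can be sketched in a minimal toy setting: estimate the marginal likelihood p(x | model) = ∫ p(x | s) p(s) ds by Monte Carlo under two candidate priors, and prefer the model with the higher value. All distributions and numbers below are illustrative assumptions, not the models studied in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): sparse latents observed through Gaussian noise.
sigma = 0.2
x = rng.laplace(scale=1.0, size=1000) + rng.normal(scale=sigma, size=1000)

def log_marginal(x, prior_samples, sigma):
    """Monte Carlo estimate of sum_i log p(x_i), with p(x_i) = E_s[N(x_i; s, sigma^2)]."""
    # log N(x_i; s_j, sigma^2) for every (data point, prior sample) pair
    ll = (-0.5 * ((x[:, None] - prior_samples[None, :]) / sigma) ** 2
          - np.log(sigma * np.sqrt(2 * np.pi)))
    # log-mean-exp over prior samples, numerically stabilised
    m = ll.max(axis=1, keepdims=True)
    return float(np.sum(m[:, 0] + np.log(np.exp(ll - m).mean(axis=1))))

S = 5000
lp_laplace = log_marginal(x, rng.laplace(scale=1.0, size=S), sigma)
lp_gauss = log_marginal(x, rng.normal(scale=np.sqrt(2.0), size=S), sigma)  # variance-matched
print(f"log p(x): sparse (Laplace) model {lp_laplace:.1f}, Gaussian model {lp_gauss:.1f}")
```

Because the data really are sparse, the Laplace-prior model attains the higher marginal likelihood even though both priors have the same variance; the same criterion, applied to natural images, is what the abstract uses to compare degrees of sparsity and overcompleteness (there via approximate inference rather than brute-force integration).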
Series: This talk is part of the Inference Group series.
Included in Lists
- All Cavendish Laboratory Seminars
- All Talks (aka the CURE list)
- Biology
- Cambridge Neuroscience Seminars
- Cambridge talks
- Centre for Health Leadership and Enterprise
- Chris Davis' list
- dh539
- Featured lists
- Guy Emerson's list
- Hanchen DaDaDash
- Inference Group
- Inference Group Summary
- Interested Talks
- Joint Machine Learning Seminars
- Life Science
- Life Sciences
- Machine Learning Summary
- ME Seminar
- ML
- Neurons, Fake News, DNA and your iPhone: The Mathematics of Information
- Neuroscience
- Neuroscience Seminars
- Required lists for MLG
- rp587
- School of Physical Sciences
- Stem Cells & Regenerative Medicine
- TCM Seminar Room, Cavendish Laboratory, Department of Physics
- Thin Film Magnetic Talks
- yk373's list
Note: Ex-directory lists are not shown.