BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:On Sparsity and Overcompleteness in Image Models - Pietro Berkes
  (and Richard Turner)
DTSTART:20080423T130000Z
DTEND:20080423T140000Z
UID:TALK11277@talks.cam.ac.uk
CONTACT:Philip Sterne
DESCRIPTION:The principles that underlie the structure of receptive
  fields in the primary visual cortex are not well understood. One
  theory is that they emerge from information-processing constraints\,
  and that two basic principles in particular play a key role. The
  first principle is sparsity. Both neural firing rates and visual
  statistics are sparsely distributed\, and sparse models for images
  have been successful in reproducing some of the characteristics of
  simple-cell receptive fields (RFs) in V1 [1\,2]. The second principle
  is overcompleteness. The number of neurons in V1 is 100-300 times
  larger than the number of neurons in the LGN. It has often been
  assumed that sparse\, overcomplete codes might lend some computational
  advantage in the processing of visual information [3\,4]. The goal of
  this work is to investigate this claim.\n\nMany different sparse\,
  overcomplete models for visual processing have been proposed. These
  have largely been evaluated on the basis of their correspondence with
  neural properties (RF frequency\, orientation\, and aspect ratio after
  learning)\, on their effectiveness in denoising natural images\, or on
  the efficiency with which they encode natural images. Only rarely\,
  however\, have the degree of sparsity\, the form of the sparsity\, and
  the level of overcompleteness themselves been addressed.\n\nHere we
  formalise these questions of optimality in the context of Bayesian
  model selection\, treating both the degree of sparsity and the extent
  of overcompleteness as parameters within a probabilistic model that
  must be learnt from natural image data. In the Bayesian framework\,
  models are compared on the basis of their marginal likelihoods\, a
  measure which reflects their ability to fit the data but also
  incorporates a Bayesian equivalent of Occam's razor by automatically
  penalising models with more parameters than the data support. We
  compare different sparse coding models and show that the optimal
  model does indeed seem to be very sparse but\, perhaps surprisingly\,
  only modestly overcomplete. Thus\, according to our results\, linear
  sparse coding models are not sufficient to explain the presence of an
  overcomplete code in the primary visual cortex.\n\n
LOCATION:TCM Seminar Room\, Cavendish Laboratory\, Department of Physics
END:VEVENT
END:VCALENDAR
