BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Artificial neurons meet real neurons: pattern selectivity in V4 vi
 a deep learning - Bin Yu (University of California\, Berkeley)
DTSTART:20160714T123000Z
DTEND:20160714T130000Z
UID:TALK66755@talks.cam.ac.uk
CONTACT:INI IT
DESCRIPTION:<span>Co-authors: Yuansi Chen (UCB)\, Reza Abbasi Asl (UCB)
 \, Adam Bloniarz (UCB)\, Jack Gallant (UCB) <br></span> <span><br>Visio
 n in humans and in non-human primates is mediated by a constellation o
 f hierarchically organized visual areas. One important area is V4\, a 
 large retinotopically organized area located between primary visual co
 rtex and high-level areas in the inferior temporal lobe. V4 neurons ha
 ve highly nonlinear response properties. Consequently\, it has been di
 fficult to construct quantitative models that accurately describe how
  visual information is represented in V4. To better understand the fi
 ltering properties of V4 neurons\, we recorded from 71 well-isolated c
 ells stimulated with natural images. We fit predictive models of neuro
 n spike rates using transformations of natural images learned by a con
 volutional neural network (CNN). The CNN was trained for image classif
 ication on the ImageNet dataset. To derive a model for each neuron\, w
 e first propagate each of the stimulus images forward to an inner laye
 r of the CNN. We use the activations of the inner layer as the feature
  (predictor) vector in a high-dimensional regression\, where the respo
 nse rate of the V4 neuron is taken as the response vector. Thus\, the 
 final model for each neuron consists of a multilayer nonlinear transfo
 rmation provided by the CNN\, and one final linear layer of weights pr
 ovided by regression. We find that models using the first two layers o
 f three well-known CNNs provide better predictions of responses of V4 
 neurons than those obtained using a conventional Gabor-like wavelet mo
 del. To characterize the spatial and pattern selectivity of each V4 ne
 uron\, we both explicitly optimize the input image to maximize the pre
 dicted spike rate\, and visualize the selected filters of the CNN. We 
 also perform dimensionality reduction by sparse PCA to visualize the p
 opulation of neurons. Finally\, we show the stability of our analysis 
 across the three CNNs\, and conclude that the V4 neurons are tuned to 
 a remarkable diversity of shapes such as curves\, blobs\, checkerboard
  patterns\, and V1-like gratings.</span>
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR