BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Serial Order in Behavior: Unifying Acoustic Meaning and Rhythm in
  Audition\, Speech\, and Music - Stephen Grossberg
DTSTART:20150911T140000Z
DTEND:20150911T153000Z
UID:TALK60303@talks.cam.ac.uk
CONTACT:Sarah Hawkins
DESCRIPTION:How do conscious percepts of auditory events arise\, whether i
 n recognizing discrete acoustic sources\, or auditory streams of speakers 
 or music? What are the functional units and processing levels that control
  such conscious percepts? How are cognitive working memories designed to t
 emporarily store sequences of acoustic items and to support learning and s
 table memory of the unitized acoustic units\, called list chunks\, that or
 ganize conscious recognition? How does a hierarchy of processing stages ge
 nerate a working memory representation of acoustic event sequences that is
  increasingly rate-invariant and frequency-normalized? How do familiar inv
 ariant representations of sequence meaning interact with\, and help to det
 ermine\, the perceived rhythm with which a sequence is heard? How are new 
 rhythms flexibly used to recall the same sequence\, as when a song or spee
 ch utterance is made with a different rhythm? In particular\, how does the
  cortical stream for generating invariant sequences interact with the cort
 ical stream for representing auditory scene analysis and its frequency-spe
 cific and rhythm-sensitive properties\, as occurs during the perception of
  pitch and timbre? Why do all working memories\, whether linguistic\, spat
 ial\, or motor\, share basic neural designs\, and thus generate similar te
 mporal order and error distribution properties? How does the brain integra
 te contextual information over many milliseconds to disambiguate noise-occ
 luded acoustical signals? How are sound sequences that are heard in noise 
 consciously heard in the correct temporal order\, even when noise-occluded
  sounds are disambiguated by contexts that may occur many milliseconds bef
 ore or after each sound is presented? These questions get a unified answer
  in Adaptive Resonance Theory\, or ART\, which is currently the most advan
 ced theory of how primate brains learn to attend\, recognize\, value\, and
  predict a changing world. ART predicts that all conscious states are reso
 nant states\, that consciously heard acoustic sequences are represented by
  resonant waves\, and that perceived silence is a temporal discontinuity i
 n the rate that such a resonant wave evolves through time. ART has begun t
 o classify the resonances that underlie conscious experiences of seeing\, 
 hearing\, knowing\, and feeling\, as part of its analysis of how brain pro
 cesses of consciousness\, learning\, expectation\, attention\, resonance\,
  and synchrony interact.\n\nThere will be a short wine reception after the
  talk.\nIf you intend to come\, please tell Sarah Hawkins (sh110@cam). Lik
 ewise if you wish to join us for dinner on Friday evening (all welcome)\, 
 or to speak privately about your work with Stephen Grossberg.
LOCATION:Faculty of English\, 9 West Road\, CB3 9DP: Room GR06/07
END:VEVENT
END:VCALENDAR
