BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Computational Neuroscience Journal Club - Kris Jensen and Wayne So
 o
DTSTART:20220531T140000Z
DTEND:20220531T153000Z
UID:TALK175247@talks.cam.ac.uk
CONTACT:Jake Stroud
DESCRIPTION:Please join us for our fortnightly journal club online via Zoo
 m\, where two presenters will jointly present a topic. The next topic i
 s ‘Transformers in computational neuroscience – advances and future dire
 ctions’\, presented by Kris Jensen and Wayne Soo.\n\nZoom information:\n
 https://us02web.zoom.us/j/84958321096?pwd=dFpsYnpJYWVNeHlJbEFKbW1OTzFiQ
 T09\nMeeting ID: 849 5832 1096\nPasscode: 506576\n\nSummary:\nSince fir
 st ri
 sing to prominence 5 years ago\, models based on the ‘transformer’ arc
 hitecture have taken the field of machine learning by storm. This has incl
 uded impressive advances on tasks ranging from image recognition and langu
 age modelling to predicting protein folding and automatically generating c
 ode. Meanwhile\, transformers have continued to play a fairly minor role i
 n systems and computational neuroscience\, although some recent work has d
 emonstrated potential uses of transformers both for neural data analysis a
 nd as explicit models of neural circuits. In this tutorial/discussion\, we
  will first provide an introduction to the transformer architecture and go
  through a notebook implementing a minimal ‘vision transformer’. We wi
 ll then talk briefly about some of the many impressive transformer-based a
 dvances in machine learning. Finally\, we will highlight recent work that 
 uses transformers in neuroscience and discuss the possible roles of suc
 h attention-based architectures in the field moving forwards.\n\nRelevan
 t litera
 ture (machine learning):\nVaswani et al. (2017): “Attention is all you n
 eed”\n\nDosovitskiy et al. (2020): “An image is worth 16x16 words: Tra
 nsformers for image recognition at scale”\n\nRelevant literature (neuros
 cience):\nYe & Pandarinath (2021): “Representation learning for neural p
 opulation activity with Neural Data Transformers”\n\nWhittington et al. 
 (2021): “Relating transformers to models and neural representations of t
 he hippocampal formation”
LOCATION:Online on Zoom
END:VEVENT
END:VCALENDAR
