BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Does Syntax Still Matter in the World of LLMs? - Miloš Stanojevi
 ć (DeepMind)
DTSTART:20231006T110000Z
DTEND:20231006T120000Z
UID:TALK206188@talks.cam.ac.uk
CONTACT:Michael Schlichtkrull
DESCRIPTION:Abstract: \n\nLarge Language Models (LLMs) have recently sho
 wn impressive results\, to the extent that some cognitive scientists cl
 aim syntactic theories should be abandoned as an explanation of human l
 anguage in favour of LLMs. I will provide evidence that syntax is still
  beneficial in both scientific and engineering pursuits with human lang
 uage. First\, unlike the syntactic theory considered here\, LLMs neithe
 r predict nor explain the universal properties of all human languages.
  Second\, activity in some brain regions is accounted for better by an
  incremental syntactic parser than by LLM surprisal. Finally\, LLMs can
  work even better when augmented with syntactic compositional structur
 e. If that is so\, you might ask\, why is syntax not more popular in NL
 P\, then? I believe it is because modern hardware accelerators (GPUs an
 d TPUs) are not optimal for tree-like computation\, making it difficult
  to train large-scale syntactic models. To address that we have created
  a JAX library\, SynJAX\, that makes it easier to build syntactic model
 s that run efficiently on GPU/TPU.\n\nBio: \n\nMiloš Stanojević is a Se
 nior Research Scientist at Google DeepMind. Prior to that he did a post
 doc at the University of Edinburgh with Mark Steedman\, where he worked
  on Combinatory Categorial Grammars (CCG) and collaborated with Ed Stab
 ler on Minimalist Grammars. He received a PhD from the University of Am
 sterdam for work on machine translation. His main research interest is
  bridging the gap between theoretical linguistics and natural language
  processing by bringing the right inductive biases to machine learning
  models of language.
LOCATION:Computer Laboratory\, room SS03
END:VEVENT
END:VCALENDAR
