BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Contextualized embeddings for lexical semantics - Katrin Erk\, Uni
 versity of Texas at Austin
DTSTART:20221020T140000Z
DTEND:20221020T150000Z
UID:TALK174146@talks.cam.ac.uk
CONTACT:Marinela Parovic
DESCRIPTION:Word embeddings are dense vector representations computed auto
 matically from large amounts of text. From a lexical semantics perspective
 \, we can view an embedding as a compact aggregate of many observed word u
 ses\, from different speakers. Contextualized word embeddings in particula
 r are highly interesting for lexical semantics because they give us a pot
 ential window into garden-variety polysemy: polysemy that is entirely idi
 osyncratic\, not regular. But there is not yet a standardized way to use c
 ontextualized embeddings for lexical semantics. I report on two studies w
 e have been doing. In the first\, we tested the use of word token cluster
 s on the task of type-level similarity. In the second\, we are mapping wo
 rd token embeddings to human-readable features. I also comment on a tren
 d in word embeddings\, from count-based embeddings to the most recent con
 textualized embeddings\, to pick up on what could be called traces of sto
 ries: text topics\, judgments and sentiment\, and cultural trends. I argu
 e that this is an interesting signal\, not a bug.
LOCATION:https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBd
 XVpOXFvdz09
END:VEVENT
END:VCALENDAR
