BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Exponential Family Embeddings - David Blei (Columbia University)
DTSTART:20160711T150000Z
DTEND:20160711T153000Z
UID:TALK66703@talks.cam.ac.uk
CONTACT:INI IT
DESCRIPTION:Word embeddings are a powerful approach for capturing sema
 ntic similarity among terms in a vocabulary. In this talk\, I will des
 cribe exponential family embeddings\, a class of methods that extend
 s the idea of word embeddings to other types of high-dimensional dat
 a. As examples\, we studied neural data with real-valued observation
 s\, count data from a market basket analysis\, and ratings data fro
 m a movie recommendation system. We then extended this idea to networ
 ks\, giving a novel type of "latent space" model. The main idea behin
 d an EF-EMB is to model each observation conditioned on a set of othe
 r observations. This set is called the context\, and the way the conte
 xt is defined is a modeling choice that depends on the problem. In lan
 guage the context is the surrounding words\; in neuroscience the conte
 xt is close-by neurons\; in market basket data the context is other it
 ems in the shopping cart\; in networks the context is edges emanatin
 g from a node pair. Each type of embedding model defines the contex
 t\, the exponential family of conditional distributions\, and how th
 e latent embedding vectors are shared across data. We infer the embed
 dings with a scalable algorithm based on stochastic gradient descen
 t. We found exponential family embedding models to be more effective t
 han other types of dimension reduction. They better reconstruct held-o
 ut data and find interesting qualitative structure.
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
