BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Modelling implicit language learning with distributional semantics
  - Dimitris Alikaniotis\, DTAL\, University of Cambridge
DTSTART:20150501T110000Z
DTEND:20150501T113000Z
UID:TALK59288@talks.cam.ac.uk
CONTACT:Tamara Polajnar
DESCRIPTION:In distributional semantics\, words acquire their meaning
  by exploiting the statistical information inherent in their
  linguistic environment. A common criticism of such representations
  is that\, unlike more traditional models of semantic memory\, they
  do not explicitly encode semantic features\, calling into question
  the cognitive relevance of such statistical mechanisms during
  language learning and processing. Here we show that distributional
  semantic models provide a good fit to data obtained from _implicit_
  language learning experiments on adults. In these experiments
  participants are introduced to novel non-words\, which co-occur with
  already known words conditioned on underlying semantic
  regularities\, such as concrete/abstract or animate/inanimate.
  Participants can implicitly learn such underlying semantic
  regularities\, although whether they do so depends on the nature of
  the conceptual distinction involved and their first language. Using
  datasets from four behavioural experiments\, which employed
  different semantic manipulations\, we obtained generalisation
  gradients that closely matched those of humans\, capturing the
  effects of various conceptual distinctions and cross-linguistic
  differences.
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
