Continuous feature structures: Can we learn structured representations with neural networks?
- Speaker: Guy Emerson, NLIP, University of Cambridge
- Date & Time: Friday 07 June 2019, 12:00 - 13:00
- Venue: FW26, Computer Laboratory
Abstract
The basic data structure for neural network models is the vector. While this is computationally efficient, vector representations require a fixed number of dimensions, which makes it impossible to encode even basic data structures that would be familiar to a first-year undergraduate, such as lists, trees, and graphs. In this talk, I will focus on feature structures, a general-purpose data structure notably used in HPSG grammars. One challenge in learning such structured representations is that they are discrete, which rules out training by gradient descent. I will present a continuous relaxation of feature structures, which allows them to be used in neural networks trained by gradient descent. In particular, I will show how these continuous feature structures can replace vectors in an LSTM, which would make it possible to learn feature structure representations of sentences.
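The abstract does not specify how the relaxation is constructed, but the core idea it describes (making a discrete structure differentiable so gradients can flow) can be illustrated with a minimal sketch. Below is a hypothetical PyTorch example, assuming a simple softmax relaxation: each feature's discrete value is replaced by a learnable probability distribution over its possible values. The names `ContinuousFeatureStructure`, `FEATURES`, and `discretize` are illustrative assumptions, not the speaker's actual method.

```python
import torch
import torch.nn.functional as F

# Hypothetical feature inventory (not from the talk): each feature has a
# small set of discrete values, as in an HPSG-style feature structure.
FEATURES = {"NUMBER": ["sg", "pl"], "PERSON": ["1", "2", "3"]}

class ContinuousFeatureStructure(torch.nn.Module):
    """One possible continuous relaxation: replace each categorical
    feature value with a softmax distribution over its values."""

    def __init__(self, features):
        super().__init__()
        self.features = features
        # One learnable logit vector per feature.
        self.logits = torch.nn.ParameterDict({
            name: torch.nn.Parameter(torch.zeros(len(values)))
            for name, values in features.items()
        })

    def forward(self):
        # Soft assignment: a differentiable distribution per feature.
        return {name: F.softmax(logits, dim=0)
                for name, logits in self.logits.items()}

    def discretize(self):
        # Recover an ordinary discrete feature structure via argmax.
        return {name: self.features[name][logits.argmax().item()]
                for name, logits in self.logits.items()}

fs = ContinuousFeatureStructure(FEATURES)
soft = fs()                # differentiable representation
loss = soft["NUMBER"][0]   # toy loss: push NUMBER away from "sg"
loss.backward()            # gradients flow into the logits
print(fs.discretize())
```

Because the soft representation is just a dict of differentiable tensors, it could in principle be concatenated or gated in place of the hidden-state vector of an LSTM cell, which is the substitution the abstract alludes to.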
Series: This talk is part of the NLIP Seminar Series.
Included in Lists
- All Talks (aka the CURE list)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Computer Education Research
- Computing Education Research
- Department of Computer Science and Technology talks and seminars
- FW26, Computer Laboratory
- Graduate-Seminars
- Guy Emerson's list
- Interested Talks
- Language Sciences for Graduate Students
- ndk22's list
- NLIP Seminar Series
- ob366-ai4er
- PMRFPS's
- rp587
- School of Technology
- Simon Baker's List
- Trust & Technology Initiative - interesting events
- yk449
Note: Ex-directory lists are not shown.