Strong Structural Priors for Neural Network Architectures
- 👤 Speaker: Tim Rocktäschel (UCL)
- 📅 Date & Time: Friday 10 June 2016, 12:00 - 13:00
- 📍 Venue: FW26, Computer Laboratory
Abstract
Many current state-of-the-art methods in natural language processing and information extraction rely on representation learning. Despite the success and wide adoption of neural networks in the field, we still face major challenges, such as (i) efficiently estimating model parameters in domains where annotation is costly and only a few training examples are available, (ii) learning interpretable representations that allow inspection and debugging of deep neural networks, and (iii) finding ways to incorporate commonsense knowledge and task-specific prior knowledge. To tackle these issues, advanced neural network architectures have recently been proposed, such as differentiable memory, attention, data structures, and even Turing machines, program interpreters, and theorem provers. In this talk I will give an overview of our work on such strong structural priors for sequence modeling, knowledge base completion, and program induction.
Series
This talk is part of the NLIP Seminar Series.
Included in Lists
- All Talks (aka the CURE list)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Computer Education Research
- Computing Education Research
- Department of Computer Science and Technology talks and seminars
- FW26, Computer Laboratory
- Graduate-Seminars
- Guy Emerson's list
- Interested Talks
- Language Sciences for Graduate Students
- ndk22's list
- NLIP Seminar Series
- ob366-ai4er
- PMRFPS's
- rp587
- School of Technology
- Simon Baker's List
- Trust & Technology Initiative - interesting events
- yk449
Note: Ex-directory lists are not shown.