AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks
- 👤 Speaker: Ryota Tomioka (MSR)
- 📅 Date & Time: Tuesday 07 November 2017, 14:00 - 15:00
- 📍 Venue: Centre for Mathematical Sciences, MR2
Abstract
New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently, even for small minibatch sizes, resulting in shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.
Series: This talk is part of the Mathematics and Machine Learning series.
Included in Lists
- All CMS events
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge talks
- Centre for Mathematical Sciences, MR2
- Chris Davis' list
- CMS Events
- DPMMS info aggregator
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- Mathematics and Machine Learning
- ndk22's list
- ob366-ai4er
- rp587
- Trust & Technology Initiative - interesting events
- yk449
Note: Ex-directory lists are not shown.