Analysis and Applications of Deep Cascade Learning
- Speaker: Doris Xin Du
- Date & Time: Friday 10 June 2022, 13:00 - 14:00
- đ Venue: This talk will be held in a hybrid format at MR 2, Centre of Mathematical Sciences, CB3 0WA. Alternatively you can also join with Zoom (see abstract for Zoom link).
Abstract
This talk is on the analysis and applications of a constructive architecture for training Deep Neural Networks (DNNs), which are usually trained by End-to-End (E2E) gradient propagation with fixed depths. E2E training of DNNs has proven to offer impressive performance in a number of applications such as computer vision, machine translation, and playing complex games such as Go. However, the massive cost in computing and memory hinders its application in many areas, such as portable medical devices. Moreover, most DNNs are data hungry, which raises further barriers to application in regions where data collection or labelling is expensive.

As an alternative, Cascade Learning (CL), the approach of interest here, trains networks in a layer-wise fashion and has been demonstrated to achieve satisfactory performance on large-scale tasks such as the popular ImageNet benchmark dataset, at substantially reduced computing and memory requirements. Here we focus on the nature of the features extracted by CL. By attempting to explain the learning process using Information Bottleneck theory, we derive an empirical rule (the Information Transition Ratio) to automatically determine a satisfactory depth for Deep Neural Networks. We suggest that CL packs information in a hierarchical manner, with coarse features in early layers and more task-specific features in later layers. This is verified by considering Transfer Learning, whereby features learned from a data-rich source domain assist learning in a data-sparse target domain. Across a wide range of inference problems in medical imaging, human activity recognition, and inference from single-cell gene expression between mice and humans, Transfer Learning from a cascade-trained model shows significant advantages in the small-data regime.
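The layer-wise training scheme the abstract describes can be sketched as follows. This is a minimal NumPy illustration of the general cascade idea, not the speaker's implementation: the XOR toy task, layer sizes, learning rate, and step counts are all assumptions chosen only to make the sketch self-contained. Each stage trains one new hidden layer plus a temporary output head on top of the frozen features from the previous stage, then discards the head and passes the new features on.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_stage(feats, y, hidden=8, lr=0.2, steps=3000, rng=None):
    """One cascade stage: fit a new hidden layer plus a temporary
    output head on top of the (frozen) input features `feats`.
    Returns the layer parameters, the new features, and the loss curve."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = feats.shape[1]
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(steps):
        H = np.tanh(feats @ W1 + b1)        # new hidden layer
        p = sigmoid(H @ W2 + b2)            # temporary output head
        q = np.clip(p, 1e-9, 1 - 1e-9)      # numerical safety for the log
        losses.append(-np.mean(y * np.log(q) + (1 - y) * np.log(1 - q)))
        dz2 = (p - y) / len(y)              # grad of BCE w.r.t. head pre-activation
        dW2, db2 = H.T @ dz2, dz2.sum(0)
        dz1 = (dz2 @ W2.T) * (1.0 - H**2)   # backprop through tanh
        dW1, db1 = feats.T @ dz1, dz1.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    # Cascade Learning keeps the hidden layer, discards the head, and
    # hands the frozen features to the next stage.
    return (W1, b1), np.tanh(feats @ W1 + b1), losses

# Toy task (assumption): XOR, which a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

feats, layers = X, []
for stage in range(2):                      # two cascade stages
    params, feats, losses = train_stage(feats, y)
    layers.append(params)
```

Only one shallow network is ever in training at a time, which is the source of the reduced computing and memory requirements mentioned in the abstract: gradients never propagate through the frozen earlier layers.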
The seminar will be held in a hybrid format. We strongly encourage you to participate in person at MR 2, Centre of Mathematical Sciences, CB3 0WA. Alternatively, you can join using the following Zoom link:
Join Zoom Meeting https://maths-cam-ac-uk.zoom.us/j/99801605037?pwd=bnlTanhLUVRKenBmTGNYckR4dnI1QT09
Meeting ID: 998 0160 5037
Security Passcode: 435654
Series
This talk is part of the CMIH Hub seminar series.
Included in Lists
- All CMS events
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge talks
- Chris Davis' list
- CMIH Hub seminar series
- Interested Talks
- ndk22's list
- ob366-ai4er
- rp587
- Trust & Technology Initiative - interesting events
- yk449

