DISCO: Dynamical Integration Systems for Convergence Optimisation in Distributed Low-Communication Training
- Speaker: Finn Anderson
- Date & Time: Friday 07 November 2025, 11:00–12:00
- đ Venue: Computer Lab, SS03
Abstract
Distributed training faces fundamental challenges from client heterogeneity in compute, memory, and network conditions. Existing approaches use staleness-dependent decay, per-client adjustments, or distance-weighted averaging, but often lack rigorous convergence guarantees. I present DISCO, a control-theoretic framework that recasts federated learning as a linear time-delay system. Using Lyapunov stability analysis, DISCO derives lightweight online adjustments with verifiable convergence bounds. Deployed on a Raspberry Pi 4 cluster, DISCO achieves 3.0–4.0× faster time-to-accuracy across text classification benchmarks. This work demonstrates how dynamical-systems theory enables provably efficient federated learning on commodity hardware.
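As background for the baselines the abstract mentions, here is a minimal sketch of staleness-dependent decay in asynchronous federated aggregation: updates computed against an older global model are down-weighted before averaging. All names and the exponential decay choice are illustrative assumptions for context, not DISCO's actual method.

```python
import math

def staleness_weight(tau: int, alpha: float = 0.5) -> float:
    # Hypothetical exponential decay: an update that is tau rounds
    # stale contributes exp(-alpha * tau) of a fresh update's weight.
    return math.exp(-alpha * tau)

def aggregate(global_model: list[float],
              client_updates: list[tuple[list[float], int]]) -> list[float]:
    # client_updates: (delta, staleness) pairs, where delta is the
    # client's parameter update and staleness counts rounds since the
    # client last pulled the global model.
    weights = [staleness_weight(tau) for _, tau in client_updates]
    total = sum(weights)
    new_model = list(global_model)
    for (delta, _), w in zip(client_updates, weights):
        for i, d in enumerate(delta):
            # Normalised, staleness-discounted average of client deltas.
            new_model[i] += (w / total) * d
    return new_model
```

With two fresh clients (staleness 0) both proposing a delta of 1.0, the aggregate moves the single parameter by exactly 1.0; a stale client's contribution shrinks as its staleness grows.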
Bio: Finn is an Engineer at Cedana AI with an MSc in Computer Science from UCL. His background spans machine learning and financial mathematics, with a focus on distributed training systems. His research explores homomorphic encryption, hardware acceleration, and distributed algorithms for efficient training at scale. Finn is particularly passionate about hardware-aware optimization and designing training workloads for commodity devices.
Series: This talk is part of the Cambridge ML Systems Seminar Series.
Included in Lists
- All Talks (aka the CURE list)
- bld31
- Cambridge talks
- Computer Lab, SS03
- Department of Computer Science and Technology talks and seminars
- Interested Talks
- School of Technology
- Trust & Technology Initiative - interesting events
- yk449
Note: Ex-directory lists are not shown.