BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:LEO: Scheduling Sensor Inference Algorithms across Heterogeneous M
 obile Processors and Network Resources - Petko Georgiev (Computer Lab)
DTSTART:20160929T140000Z
DTEND:20160929T150000Z
UID:TALK67277@talks.cam.ac.uk
CONTACT:Liang Wang
DESCRIPTION:Mobile apps that use sensors to monitor user behavior often
 employ resource-heavy inference algorithms that make computational
 offloading a common practice. However\, existing schedulers/offloaders
 typically emphasize one primary offloading aspect without fully
 exploring complementary goals (e.g.\, heterogeneous resource management
 with only partial visibility into underlying algorithms\, or concurrent
 sensor app execution on a single resource) and\, as a result\, may
 overlook performance benefits pertinent to sensor processing. We bring
 together key ideas scattered across existing offloading solutions to
 build LEO\, a scheduler designed to maximize performance for the unique
 workload of continuous and intermittent mobile sensor apps without
 changing their inference accuracy. LEO uses domain-specific signal
 processing knowledge to smartly distribute sensor processing tasks
 across the broad range of heterogeneous computational resources of
 high-end phones (CPU\, co-processor\, GPU and the cloud). To exploit
 short-lived but substantial optimization opportunities\, and to remain
 responsive to the needs of near-real-time apps such as voice-based
 natural user interfaces\, LEO runs as a service on a low-power
 co-processor unit (LPU) and performs frequent\, joint schedule
 optimization for concurrent pipelines. Depending on the workload and
 network conditions\, LEO is between 1.6 and 3 times more energy
 efficient than conventional cloud offloading with CPU-bound sensor
 sampling. In addition\, even when a general-purpose scheduler is
 optimized directly to leverage an LPU\, LEO still incurs only a
 fraction (< 1/7) of the scheduling energy overhead and is up to 19%
 more energy efficient for medium to heavy workloads.
LOCATION:FW26\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
