BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:1) Mining Users' Significant Driving Routes with Low-power Sensors
  2) DSP.Ear: Leveraging Co-Processor Support for Continuous Audio Sensing 
 on Smartphones - Sarfraz Nawaz and Petko Georgiev  (University of Cambridg
 e)
DTSTART:20141023T140000Z
DTEND:20141023T150000Z
UID:TALK54153@talks.cam.ac.uk
CONTACT:Eiko Yoneki
DESCRIPTION:2 practice talks for SenSys 2014.\n\n1) While there is signifi
 cant work on sensing and recognition of significant places for users\, lit
 tle attention has been given to users' significant routes. Recognizing the
 se routine journeys can open doors for the development of novel applicati
 ons\, like personalized travel alerts\, and enhancement of the user's travel
  experience. However\, the high energy consumption of traditional location
  sensing technologies\, such as GPS or WiFi based localization\, is a barr
 ier to passive and ubiquitous route sensing through smartphones.\n\nIn thi
 s paper\, we present a passive route sensing framework that continuously m
 onitors a vehicle user solely through a phone's gyroscope and acceleromete
 r. This approach can differentiate and recognize various routes taken by t
 he user by time warping angular speeds experienced by the phone while in t
 ransit and is independent of phone orientation and location within the veh
 icle\, small detours\, and traffic conditions. We compare the route learning
  and recognition capabilities of this approach with GPS trajectory analysi
 s and show that it achieves similar performance. Moreover\, with an embedd
 ed co-processor\, common to most new generation phones\, it achieves energ
 y savings of an order of magnitude over the GPS sensor.\n\n2) The rapidly 
 growing adoption of sensor-enabled smartphones has greatly fueled the prol
 iferation of applications that use phone sensors to monitor user behavior.
  A central sensor among these is the microphone which enables\, for instan
 ce\, the detection of valence in speech\, or the identification of speaker
 s. Deploying several of these applications on a mobile device to continuo
 usly monitor the audio environment allows for the acquisition of a diverse
  range of sound-related contextual inferences. However\, the cumulative pr
 ocessing burden critically impacts the phone battery.\n\nTo address this p
 roblem\, we propose DSP.Ear -- an integrated sensing system that takes adv
 antage of the latest low-power DSP co-processor technology in commodity mo
 bile devices to enable the continuous and simultaneous operation of multip
 le established algorithms that perform complex audio inferences. The syste
 m extracts emotions from voice\, estimates the number of people in a room\
 , identifies the speakers\, and detects commonly found ambient sounds\, wh
 ile critically incurring little overhead to the device battery. This is ac
 hieved through a series of pipeline optimizations that allow the computati
 on to remain largely on the DSP. Through detailed evaluation of our protot
 ype implementation we show that\, by exploiting a smartphone's co-processo
 r\, DSP.Ear achieves a 3 to 7 times increase in the battery lifetime comp
 ared to a solution that uses only the phone's main processor. In addition\
 , DSP.Ear is 2 to 3 times more power efficient than a naive DSP solution w
 ithout optimizations. We further analyze a large-scale dataset from 1320 An
 droid users to show that in about 80-90% of the daily usage instances DSP.E
 ar is able to sustain a full day of operation (even in the presence of othe
 r smartphone workloads) with a single battery charge.\n\n
LOCATION:FW26\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
