Multimodal Inference and Assistance for Effortless XR Interaction
- Speaker: Aakar Gupta, Fujitsu Research America
- Date & Time: Thursday 07 March 2024, 11:00 - 12:00
- Venue: Sir Arthur Marshall Room, Engineering Design Centre, CUED
Abstract
Spatial computing wearables promise to usher in the third wave of computing devices, but how these devices facilitate effective user interaction remains an open question. In this talk, I propose that leveraging multimodal capabilities along with intelligent inference techniques yields highly performant and effortless interactions. I’ll discuss multiple projects that use multimodal information, such as gaze and hand dynamics, to implicitly infer the user’s intent and act on it. I’ll further discuss how rich wearable haptics can be designed to aid user interaction in XR.
Speaker Bio: Aakar is a Principal Researcher at Fujitsu Research America. Prior to this, he worked as a Research Scientist at Meta Reality Labs Research for four years. He received his PhD in Computer Science from the University of Toronto. Aakar’s primary research area is computational and AI-assisted interaction for spatial computing. Before his PhD, Aakar worked on technology interventions for underserved users in India in collaboration with Microsoft Research Bangalore. His work has resulted in 30+ publications at top-tier HCI venues such as CHI and UIST, including four Best Paper Honorable Mention Awards.
Series: This talk is part of the jjd50's list series.
