BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Le
 arning - Nicholas Zolman (University of Washington)
DTSTART:20240214T140000Z
DTEND:20240214T150000Z
UID:TALK212419@talks.cam.ac.uk
CONTACT:Matthew Colbrook
DESCRIPTION:Deep Reinforcement Learning (DRL) has shown significant promis
 e for uncovering sophisticated control policies that interact in environme
 nts with complicated dynamics\, such as stabilizing the magnetohydrodynami
 cs of a tokamak reactor and minimizing the drag force exerted on an object
  in a fluid flow. However\, these algorithms require many training example
 s and can become prohibitively expensive for many applications. In additio
 n\, the reliance on deep neural networks results in an uninterpretable\, b
 lack-box policy that may be too computationally challenging to use with ce
 rtain embedded systems. Recent advances in sparse dictionary learning\, su
 ch as the Sparse Identification of Nonlinear Dynamics (SINDy)\, have been
  shown to be promising methods for creating efficient and interpretable
  data-driven models in the low-data regime. In this work\, we extend idea
 s from th
 e SINDy literature to introduce a unifying framework for combining sparse 
 dictionary learning and DRL to create efficient\, interpretable\, and trus
 tworthy representations of the dynamics model\, reward function\, and cont
 rol policy. We demonstrate the effectiveness of our approaches on benchmar
 k control environments and challenging fluids problems\, achieving compara
 ble performance to state-of-the-art DRL algorithms using significantly few
 er interactions in the environment and an interpretable control policy ord
 ers of magnitude smaller than a deep neural network policy.
LOCATION:Centre for Mathematical Sciences\, MR14
END:VEVENT
END:VCALENDAR
