BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Control and Adaptation Under Latent Risk and Nonstationary Interac
 tions - Yorie Nakahira\, Carnegie Mellon University
DTSTART:20251127T140000Z
DTEND:20251127T150000Z
UID:TALK240547@talks.cam.ac.uk
CONTACT:Fulvio Forni
DESCRIPTION:Autonomous systems must operate safely in uncertain\, interact
 ive\, and nonstationary environments alongside humans. To enhance their ca
 pabilities\, we are developing techniques for stochastic safe control\, un
 certainty and risk quantification\, adaptation\, and language-guided contr
 ol. \n\nIn this talk\, we begin by quantifying and assuring safety from da
 ta in the presence of latent risks. Many systems contain unobservable va
 riables that render system dynamics partially unidentifiable or induce dis
 tribution shifts between offline and online statistics\, even when the und
 erlying dynamics remain unchanged. Such “spurious” distribution shifts
  often break standard approaches for risk quantification and stochastic sa
 fe control. To overcome this\, we propose a framework for designing data-d
 riven safety certificates for systems with latent risks. On the inference 
 side\, the framework employs physics-informed learning/RL to estimate long
 -term risk from data without sufficient risk events\, while exploiting str
 uctural properties such as low-dimensional representations of risk or grap
 h decompositions of multi-agent systems. On the control side\, it builds o
 n a new notion of invariance\, termed probabilistic invariance\, which all
 ows safety conditions to be constructed from data\, despite spurious distr
 ibution shifts and partially unidentifiable dynamics.\n\nNext\, we introdu
 ce our work toward achieving lifelong safety in systems with self-seeking 
 humans or adaptive opponents. Through modeling and experiments with human 
 subjects\, we find that worst-case control can inadvertently induce advers
 arial opponent adaptation that increases risk in future interactions. This
  observation mirrors the empirical literature on social dilemmas and human
  risk compensation\, yet its implications for lifelong risk in nonstationa
 ry interactions have rarely been investigated. This result also suggests a
 n underexplored potential to proactively shape desirable opponent adaptati
 ons for enhanced performance and safety. \n\nFinally\, we will present our
  ongoing work on uncertainty quantification in neural networks\, sequentia
 l fine-tuning of Bayesian transformers\, and language-guided control. At t
 he core of our approach are analytic solutions for the moments of random v
 ariables passed through nonlinear activation functions. These solutions en
 able a moment propagation method that tracks mean vectors and covariance m
 atrices across networks\, providing sample-free tools for uncertainty quan
 tification and robustness analysis. Building on this\, we introduce a doub
 le-Bayesian framework for sequential transformer fine-tuning. This framewo
 rk reformulates fine-tuning as posterior inference across layers and time 
 (data)\, which is reducible to one-pass propagation of analytic formulas\,
  eliminating the need for iterative gradient computation. These techniques
  are particularly useful for language-guided control when uncertainty esti
 mates are needed for robust decision-making.\n\nBio: Yorie Nakahira is an 
 Assistant Professor in the Department of Electrical and Computer Engineeri
 ng at Carnegie Mellon University. She received her B.E. in Control and S
 ystems Engineering from Tokyo Institute of Technology and her Ph.D. in C
 ontrol and Dynamical Systems from California Institute of Technology. He
 r research goa
 l is to develop control and learning techniques that enhance the capabilit
 ies of autonomous systems. On the algorithm side\, her group studies robus
 t and safe control\, uncertainty and risk quantification\, adaptation algo
 rithms\, and language-guided control. On the application side\, her group 
 explores diverse topics\, ranging from autonomous control systems to human
  sensorimotor control to poverty alleviation policy design. She has receiv
 ed four prestigious young investigator awards\, including the NSF CAREER\,
  and holds a part-time position at the Research and Development Center for
  Large Language Models\, where she applies control theory to LLM-jp. Her g
 roup will be recruiting Ph.D. students or postdoctoral researchers in 2026
 .\n
LOCATION:LR10\, Department of Engineering
END:VEVENT
END:VCALENDAR
