BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Certified dimension reduction of the input parameter space of vect
 or-valued functions - Olivier Zahm (Massachusetts Institute of Technology)
DTSTART:20180308T144500Z
DTEND:20180308T153000Z
UID:TALK102061@talks.cam.ac.uk
CONTACT:INI IT
DESCRIPTION:<span>Co-authors: Paul Constantine (University of Colorado)\, C
 l&eacute\;mentine Prieur (University Joseph Fourier)\, Youssef Marzouk (MI
 T)<br></span><br>Approximation of multivariate functions is a difficult t
 ask when the number of input parameters is large. Identifying the directi
 ons along which the function does not significantly vary is a key preproc
 essing step for reducing the complexity of approximation algorithms. <br>
 <br>Among other dimensionality reduction tools\, the active subspace is d
 efined by means of the gradient of a scalar-valued function. It can be in
 terpreted as the subspace of the parameter space in which the gradient va
 ries the most. In this talk\, we propose a natural extension of the activ
 e subspace to vector-valued functions\, e.g.\, functions with multiple sc
 alar-valued outputs or functions taking values in function spaces. Our me
 thodology consists of minimizing an upper bound on the approximation erro
 r\, obtained using Poincar&eacute\;-type inequalities. <br><span><br>We a
 lso compare the proposed gradient-based approach with the popular and wid
 ely used truncated Karhunen-Lo&egrave\;ve (KL) decomposition. We show tha
 t\, from a theoretical perspective\, the truncated KL decomposition can b
 e interpreted as a method that minimizes a looser upper bound on the erro
 r than the one we derive. Numerical comparisons also show that better dim
 ension reduction can be obtained when gradients of the function are avail
 able.</span>
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
