BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Data-driven calibration of linear estimators with minimal penaltie
 s\, with an application to multi-task regression - Sylvain Arlot\, École 
 Normale Supérieure\, Paris
DTSTART:20111104T160000Z
DTEND:20111104T170000Z
UID:TALK32896@talks.cam.ac.uk
CONTACT:Richard Samworth
DESCRIPTION:This talk tackles the problem of selecting among several line
 ar estimators in non-parametric regression\; this includes model selectio
 n for linear regression\, the choice of a regularization parameter in ker
 nel ridge regression or spline smoothing\, the choice of a kernel in mult
 iple kernel learning\, the choice of a bandwidth for Nadaraya-Watson esti
 mators\, and the choice of k for k-nearest neighbors regression.\n\nWe pr
 opose a new algorithm which first consistently estimates the noise varian
 ce\, based upon the concept of minimal penalty\, previously introduced in
 the context of model selection. We then prove that plugging this varianc
 e estimate into Mallows’ C_L penalty yields an algorithm satisfying an o
 racle inequality. Simulation experiments show that the proposed algorith
 m often significantly improves on existing calibration procedures such a
 s 10-fold cross-validation or generalized cross-validation.\n\nWe then p
 rovide an application to the kernel multiple ridge regression framework
 \, which we refer to as multi-task regression. The theoretical analysis
  of this problem shows that the key quantity for optimal calibration is
  the covariance matrix of the noise across the different tasks. We prese
 nt a new algorithm for estimating this covariance matrix\, based upon se
 veral single-task variance estimates. We show\, in a non-asymptotic sett
 ing and under mild assumptions on the target function\, that this estima
 tor converges to the covariance matrix. Plugging this estimator into th
 e corresponding ideal penalty then leads to an oracle inequality. We ill
 ustrate the behaviour of our algorithm on synthetic examples.\n\nThis ta
 lk is based on two joint works with Francis Bach and Matthieu Solnon:\n
 \nS. Arlot\, F. Bach. Data-driven Calibration of Linear Estimators with
  Minimal Penalties. arXiv:0909.1884\n\nM. Solnon\, S. Arlot\, F. Bach. M
 ulti-task Regression using Minimal Penalties. arXiv:1107.4512
LOCATION:MR12\, CMS\, Wilberforce Road\, Cambridge\, CB3 0WB
END:VEVENT
END:VCALENDAR
