BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Breaking the Curse of Dimensionality with Convex Neural Networks -
  Francis Bach (INRIA Paris - Rocquencourt\; ENS - Paris)
DTSTART:20171031T145000Z
DTEND:20171031T154000Z
UID:TALK94120@talks.cam.ac.uk
CONTACT:INI IT
DESCRIPTION:We consider neural networks with a single hidden layer and
  non-decreasing positively homogeneous activation functions like the
  rectified linear units. By letting the number of hidden units grow
  unbounded and using classical non-Euclidean regularization tools on
  the output weights\, they lead to a convex optimization problem and
  we provide a detailed theoretical analysis of their generalization
  performance\, with a study of both the approximation and the
  estimation errors. We show in particular that they are adaptive to
  unknown underlying linear structures\, such as the dependence on the
  projection of the input variables onto a low-dimensional subspace.
  Moreover\, when using sparsity-inducing norms on the input weights\,
  we show that high-dimensional non-linear variable selection may be
  achieved\, without any strong assumption regarding the data and with
  a total number of variables potentially exponential in the number of
  observations. However\, solving this convex optimization problem in
  infinite dimensions is only possible if the non-convex subproblem of
  addition of a new unit can be solved efficiently. We provide a simple
  geometric interpretation for our choice of activation functions and
  describe simple conditions for convex relaxations of the
  finite-dimensional non-convex subproblem to achieve the same
  generalization error bounds\, even when constant-factor
  approximations cannot be found. We were not able to find strong
  enough convex relaxations to obtain provably polynomial-time
  algorithms and leave open the existence or non-existence of such
  tractable algorithms with non-exponential sample complexities.\n\n
 Related links: http://jmlr.org/papers/volume18/14-546/14-546.pdf - JMLR
  paper
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
