BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:On Vanishing Gradients\, Over-Smoothing\, and Over-Squashing in GN
 Ns: Bridging Recurrent and Graph Learning - Alvaro Arroyo\, University of 
 Oxford
DTSTART:20250306T170000Z
DTEND:20250306T174500Z
UID:TALK229219@talks.cam.ac.uk
CONTACT:Pietro Lio
DESCRIPTION:Graph Neural Networks (GNNs) are models that leverage the grap
 h structure to transmit information between nodes\, typically through the 
 message-passing operation. While widely successful\, this approach is well
  known to suffer from the over-smoothing and over-squashing phenomena\, wh
 ich result in representational collapse as the number of layers increases 
 and insensitivity to the information contained at distant and poorly conne
 cted nodes\, respectively. In this paper\, we present a unified view of th
 ese problems through the lens of vanishing gradients\, using ideas from li
 near control theory for our analysis. We propose an interpretation of GNNs
  as recurrent models and empirically demonstrate that a simple state-space
  formulation of a GNN effectively alleviates over-smoothing and over-squas
 hing at no extra trainable parameter cost. Further\, we show theoretically
  and empirically that (i) GNNs are by design prone to extreme gradient van
 ishing even after a few layers\; (ii) over-smoothing is directly related t
 o the mechanism causing vanishing gradients\; (iii) over-squashing is most
  easily alleviated by a combination of graph rewiring and vanishing gradie
 nt mitigation. We believe our work will help bridge the gap between the re
 current and graph neural network literature and will unlock the design of 
 new deep and performant GNNs.\n\nAlso: meet.google.com/bhz-scdk-unp
LOCATION:Lecture Theatre 1\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
