BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Global Explainability of GNNs via Logic Combination of Learned Con
 cepts - Steve Azzolin
DTSTART:20221125T170000Z
DTEND:20221125T180000Z
UID:TALK192992@talks.cam.ac.uk
CONTACT:Pietro Lio
DESCRIPTION:While instance-level explanation of GNNs is a well-studied pro
 blem with plenty of approaches being developed\, providing a global explana
 tion for the behavior of a GNN is much less explored\, despite its potenti
 al in interpretability and debugging. Existing solutions either simply lis
 t local explanations for a given class\, or generate a synthetic prototypi
 cal graph with maximal score for a given class\, completely missing any co
 mbinatorial aspect that the GNN could have learned.\nIn this work\, we pro
 pose GLGExplainer (Global Logic-based GNN Explainer)\, the first Global Ex
 plainer capable of generating explanations as arbitrary Boolean combinatio
 ns of learned graphical concepts. GLGExplainer is a fully differentiable a
 rchitecture that takes local explanations as inputs and combines them into
  a logic formula over graphical concepts\, represented as clusters of loca
 l explanations. \nContrary to existing solutions\, GLGExplainer provides a
 ccurate and human-interpretable global explanations that are aligned with 
 ground-truth explanations (on synthetic data) or match existing domain kno
 wledge (on real-world data). Extracted formulas are faithful to the model 
 predictions\, to the point of providing insights into some occasionally in
 correct rules learned by the model\, making GLGExplainer a promising diagn
 ostic tool for learned GNNs.\n\nhttps://zoom.us/j/99166955895?pwd=SzI0M3pM
 VEkvNmw3Q0dqNDVRalZvdz09
LOCATION:Online (Zoom)
END:VEVENT
END:VCALENDAR
