BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Concept Embedding Models: Beyond the Accuracy-Explainability Trade
 -off - Mateo Espinosa Zarlenga (University of Cambridge)
DTSTART:20230124T130000Z
DTEND:20230124T140000Z
UID:TALK195253@talks.cam.ac.uk
CONTACT:Mateja Jamnik
DESCRIPTION:Join us in Lecture Theatre 2 or on "Zoom":https://zoom.us/j/99
 166955895?pwd=SzI0M3pMVEkvNmw3Q0dqNDVRalZvdz09\n\nDeploying AI-powered sys
 tems requires trustworthy models supporting effective human interactions\,
  going beyond raw prediction accuracy. Concept bottleneck models promote t
 rustworthiness by conditioning classification tasks on an intermediate lev
 el of human-like concepts. This enables human interventions which can corr
 ect mispredicted concepts to improve the model's performance. However\, ex
 isting concept bottleneck models are unable to find optimal compromises be
 tween high task accuracy\, robust concept-based explanations\, and effecti
 ve interventions on concepts -- particularly in real-world conditions wher
 e complete and accurate concept annotations are scarce. In this talk I wil
 l describe Concept Embedding Models\, a novel family of concept bottleneck
  models which goes beyond the current accuracy-vs-interpretability trade-o
 ff by learning interpretable high-dimensional concept representations. Our
  experiments demonstrate that Concept Embedding Models (a) attain better o
 r competitive task accuracy w.r.t. standard neural models without concepts
 \, (b) provide concept representations capturing meaningful semantics incl
 uding and beyond their ground truth labels\, (c) support test-time concept
  interventions whose effect on test accuracy surpasses that in standard
  concept bottleneck models\, and (d) scale to real-world conditions where
  complete concept supervision is scarce.
LOCATION:Lecture Theatre 2\, Computer Laboratory\, William Gates Building 
 and Zoom
END:VEVENT
END:VCALENDAR
