BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Explainable AI in Neuroscience: From Interpretability to Biomarker
  Discovery - Mike Mamalakis (University of Cambridge)
DTSTART:20250513T120000Z
DTEND:20250513T130000Z
UID:TALK231217@talks.cam.ac.uk
CONTACT:Mateja Jamnik
DESCRIPTION:Explainability plays a pivotal role in building trust and fost
 ering the adoption of artificial intelligence (AI) in healthcare\, particu
 larly in high-stakes domains like neuroscience where decisions directly af
 fect patient outcomes. While progress in AI interpretability has been sub
 stantial\, there remains a lack of clear\, domain-specific guidelines for 
 constructing meaningful and clinically relevant explanations.\nIn this tal
 k\, I will explore how explainable AI (XAI) can be effectively integrated 
 into neuroscience applications. I will outline practical strategies for le
 veraging interpretability methods to uncover novel patterns in neural data
 \, and discuss how these insights can inform the identification of emergin
 g biomarkers. Drawing on recent developments\, I will highlight adaptable 
 XAI frameworks that enhance transparency and support data-driven discovery
 . \nTo validate these concepts\, I will present illustrative case studies 
 involving large language models (LLMs) and vision transformers applied to 
 neuroscience. These examples serve as proof of concept\, showcasing how ex
 plainable AI can not only translate complex model behavior into human-unde
 rstandable insights\, but also support the discovery of novel patterns and
  potential biomarkers relevant to clinical and research applications.\n\nY
 ou can also join us on Zoom: https://cam-ac-uk.zoom.us/j/83400335522?pwd=L
 kjYvMOvVpMbabOV1MVTm8QU6DrGN7.1
LOCATION:Lecture Theatre 2\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
