
From Explanation to Trust: Modeling and Measuring Trust in Explainable Decision Support


If you have a question about this talk, please contact Pietro Lio.

This seminar presents findings from my doctoral research at Eindhoven University of Technology, together with work carried out as a guest at the Computer Laboratory of the University of Cambridge, on human trust in AI-based decision support. The thesis investigates trust in machine learning models through three complementary studies.

Highlights include a case study on COVID-19 diagnosis, in which the perceived trust of medical experts, understood as self-reported trust, was modeled as a complex, context-dependent phenomenon rather than a single dimension. A second case study, on distal myopathy, assessed interpretability quality both through radiologists' evaluations and through objective metrics from the XAI literature. A broader human-subjects study further revealed a clear distinction between perceived trust and demonstrated trust, the latter referring to users' actual delegation of decisions to the AI.

Across these studies, a notable gap emerged between objective explainability metrics and expert assessments, underscoring the difficulty of aligning computational measures with professional judgment. Together, these findings highlight discrepancies between reported attitudes, expert opinion, and actual behavior, and offer concrete guidance for designing AI-based decision support systems that are both interpretable and trustworthy.

This talk is part of the Data Science and AI in Medicine series.

