How good is your classifier? Revisiting the role of evaluation metrics in machine learning
- 🎤 Speaker: Sanmi Koyejo, University of Illinois
- 📅 Date & Time: Wednesday 31 July 2019, 11:00 - 12:00
- 📍 Venue: Auditorium, Microsoft Research Ltd, 21 Station Road, Cambridge, CB1 2FB
Abstract
With the increasing integration of machine learning into real systems, it is crucial that trained models are optimized to reflect real-world tradeoffs. Increasing interest in proper evaluation has led to a wide variety of metrics employed in practice, often specially designed by experts. However, modern training strategies have not kept up with the explosion of metrics, leaving practitioners to resort to heuristics. To address this shortcoming, I will present a simple, yet consistent post-processing rule which improves the performance of trained binary, multilabel, and multioutput classifiers. Building on these results, I will propose a framework for metric elicitation, which addresses the broader question of how one might select an evaluation metric for real-world problems so that it reflects true preferences.
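To give a concrete flavour of metric-aware post-processing, the sketch below tunes the decision threshold of an already-trained binary classifier on held-out data to maximise a target metric (F1 here). This is only an illustrative, hypothetical example of the general idea; the consistent post-processing rule presented in the talk may differ, and the function names (`f1_score`, `tune_threshold`) are assumptions, not the speaker's code.

```python
# Hypothetical sketch: post-process a trained binary classifier by
# choosing the score threshold that maximises a target metric (F1)
# on validation data, instead of using the default threshold of 0.5.

def f1_score(y_true, y_pred):
    """F1 = 2*TP / (2*TP + FP + FN) for binary labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def tune_threshold(scores, y_true, metric=f1_score):
    """Pick the score threshold that maximises `metric` on validation data."""
    best_t, best_m = 0.5, -1.0
    for t in sorted(set(scores)):  # candidate thresholds: observed scores
        preds = [1 if s >= t else 0 for s in scores]
        m = metric(y_true, preds)
        if m > best_m:
            best_t, best_m = t, m
    return best_t

# Usage: validation scores from some classifier, plus true labels.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2]
labels = [0, 0, 1, 1, 1, 0]
threshold = tune_threshold(scores, labels)  # -> 0.35 for this data
```

The same pattern generalises: swapping in a different `metric` function retargets the classifier to whatever tradeoff the metric encodes, without retraining.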
Series: This talk is part of the Frontiers in Artificial Intelligence Series.