Weighted evaluation of probabilistic forecasts
- Speaker: Sam Allen (Karlsruhe Institute of Technology (KIT))
- Date & Time: Thursday 05 June 2025, 11:15 - 12:15
- Venue: Seminar Room 1, Newton Institute
Abstract
The evaluation of probabilistic forecasts focuses on two aspects of forecast performance: forecast accuracy and forecast calibration. Forecast accuracy refers to how ‘close’ the forecast is to the corresponding observation, which can be quantified using proper scoring rules, while forecast calibration considers to what extent probabilistic forecasts are trustworthy. Most scoring rules and checks for calibration treat all possible outcomes equally. However, certain outcomes are often of more interest than others, and these outcomes should therefore be emphasised during forecast evaluation. For example, extreme outcomes typically lead to the largest impacts on forecast users, making accurate and calibrated forecasts for these outcomes particularly valuable. In this talk, we discuss methods to focus on particular outcomes when evaluating probabilistic forecasts. We review weighted scoring rules, which allow practitioners to incorporate a weight function into conventional scoring rules when calculating forecast accuracy, and demonstrate that the theory underlying weighted scoring rules can readily be extended to checks for forecast calibration. Just as proper scores can be decomposed to obtain a measure of forecast miscalibration, weighted scores can be decomposed to yield a measure of weighted forecast calibration.
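As an illustration of the weighted scoring rules mentioned in the abstract, the following sketch computes a threshold-weighted CRPS for an ensemble forecast via the Brier-score integral representation, twCRPS(F, y) = ∫ w(z) (F(z) − 1{y ≤ z})² dz. This is a minimal illustration, not code from the talk; the function name `tw_crps` and the example weight functions are illustrative.

```python
import numpy as np

def tw_crps(ensemble, obs, weight, grid):
    """Threshold-weighted CRPS, approximated on a uniform grid of
    thresholds: twCRPS(F, y) = int w(z) * (F(z) - 1{y <= z})^2 dz."""
    ensemble = np.asarray(ensemble)
    # Empirical CDF of the ensemble, evaluated at each grid point
    F = np.mean(ensemble[None, :] <= grid[:, None], axis=1)
    indicator = (obs <= grid).astype(float)
    integrand = weight(grid) * (F - indicator) ** 2
    dz = grid[1] - grid[0]          # uniform grid spacing
    return np.sum(integrand) * dz   # simple Riemann approximation

grid = np.linspace(-10.0, 10.0, 2001)
ens = np.random.default_rng(0).normal(0.0, 1.0, 100)

# w(z) = 1 recovers the ordinary (unweighted) CRPS; an indicator
# weight w(z) = 1{z >= t} emphasises the upper tail, i.e. extremes.
crps = tw_crps(ens, 0.5, lambda z: np.ones_like(z), grid)
tail = tw_crps(ens, 0.5, lambda z: (z >= 1.0).astype(float), grid)
```

Because the indicator weight is bounded above by one, the tail-weighted score can never exceed the unweighted CRPS; choosing the weight function is how a practitioner encodes which outcomes matter most.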
Series: This talk is part of the Isaac Newton Institute Seminar Series.
