BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Weighted evaluation of probabilistic forecasts - Sam Allen (Karlsr
 uhe Institute of Technology (KIT))
DTSTART:20250605T101500Z
DTEND:20250605T111500Z
UID:TALK230818@talks.cam.ac.uk
DESCRIPTION:The evaluation of probabilistic forecasts focuses on two aspec
 ts of forecast performance: forecast accuracy and forecast calibration. Fo
 recast accuracy refers to how 'close' the forecast is to the corresponding
  observation\, which can be quantified using proper scoring rules\, while 
 forecast calibration assesses the extent to which probabilistic forecasts 
 are trustworthy. Most scoring rules and checks for calibration treat all 
 possi
 ble outcomes equally. However\, certain outcomes are often of more interes
 t than others\, and these outcomes should therefore be emphasised during f
 orecast evaluation. For example\, extreme outcomes typically lead to the l
 argest impacts on forecast users\, making accurate and calibrated forecast
 s for these outcomes particularly valuable. In this talk\, we discuss meth
 ods to focus on particular outcomes when evaluating probabilistic forecast
 s. We review weighted scoring rules\, which allow practitioners to incorpo
 rate a weight function into conventional scoring rules when calculating fo
 recast accuracy\, and demonstrate that the theory underlying weighted scor
 ing rules can readily be extended to checks for forecast calibration. Just
  as proper scores can be decomposed to obtain a measure of forecast miscal
 ibration\, weighted scores can be decomposed to yield a measure of weighte
 d forecast calibration.
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
