BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Multi-scale cross-attention transformer encoder for event classifi
 cation - Mihoko Nojiri (KEK)
DTSTART:20240315T160000Z
DTEND:20240315T170000Z
UID:TALK213112@talks.cam.ac.uk
CONTACT:Benjamin Christopher Allanach
DESCRIPTION:We deploy an advanced Machine Learning (ML) environment\, le
 veraging a multi-scale cross-attention encoder for event classification
 \, taking the gg→H→hh→bbbb process at the High Luminosity Large Hadron 
 Collider (HL-LHC) as an example. In the boosted Higgs regime\, the fina
 l state consists of two fat jets. Our multi-modal network extracts info
 rmation from the jet substructure and the kinematics of the final-state
  particles through self-attention transformer layers. The learned infor
 mation is then integrated through an additional transformer encoder wit
 h cross-attention heads to improve classification performance. We demon
 strate that our approach outperforms current alternative ML methods\, w
 hether based solely on kinematic analysis or on a combination of kinema
 tics with mainstream ML approaches. We then employ various interpretive
  methods to evaluate the network's results\, including attention-map an
 alysis and visualisation of Gradient-weighted Class Activation Mapping 
 (Grad-CAM). The proposed network is generic and can be applied to analy
 se any process carrying information at different scales.
LOCATION:***note unusual venue*** MR 9 (Pavilion B)\, CMS
END:VEVENT
END:VCALENDAR
