BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Relative representations enable zero-shot latent space communicati
 on - Luca Moschella\, Sapienza University of Rome
DTSTART:20230215T140000Z
DTEND:20230215T170000Z
UID:TALK197368@talks.cam.ac.uk
CONTACT:Pietro Lio
DESCRIPTION:Neural networks embed the geometric structure of a data manifo
 ld lying in a high-dimensional space into latent representations. Ideally\
 , the distribution of the data points in the latent space should depend on
 ly on the task\, the data\, the loss\, and other architecture-specific con
 straints. However\, factors such as random weight initialization\, tr
 aining hyperparameters\, or other sources of randomness in the training ph
 ase may induce incoherent latent spaces that hinder any form of reuse. Nev
 ertheless\, we empirically observe that\, under the same data and modeling
  choices\, distinct latent spaces typically differ by an unknown quasi-iso
 metric transformation: that is\, the pairwise distances between the enco
 dings are approximately preserved across spaces. In this work\, we propos
 e to adopt pairwise similarities as an alternative data representation th
 at can be used to enfor
 ce the desired invariance without any additional training. We show how neu
 ral architectures can leverage these relative representations to guarantee
 \, in practice\, latent isometry invariance\, effectively enabling latent 
 space communication: from zero-shot model stitching to latent space compar
 ison between diverse settings. We extensively validate the generalization 
 capability of our approach on different datasets\, spanning various modali
 ties (images\, text\, graphs)\, tasks (e.g.\, classification\, reconstruct
 ion)\, and architectures (e.g.\, CNNs\, GCNs\, transformers).
LOCATION:Lecture Theatre 2
END:VEVENT
END:VCALENDAR
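
A minimal sketch of the relative-representation idea described in the
abstract above, assuming cosine similarity against a fixed set of anchor
embeddings; the abstract only says "pairwise similarities", so the anchor
set, the cosine choice, and every name below are illustrative assumptions:

import numpy as np

def relative_representation(embeddings, anchors):
    # Row-normalize so that dot products become cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    anc = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    # Entry (i, k) is cos(e_i, a_k): each sample is re-expressed by its
    # similarities to the anchors instead of by absolute coordinates.
    return emb @ anc.T

# Hypothetical usage: cosine similarity is invariant to rotations,
# reflections, and rescalings of the latent space, so two encoders whose
# latent spaces differ by such a quasi-isometric transformation produce
# (approximately) the same relative representation.
rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 64))  # stand-in for encoder outputs
anchor_set = latents[:10]             # stand-in anchors drawn from the data
rel = relative_representation(latents, anchor_set)  # shape (100, 10)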
