BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Automatic Identification of Samples in Hip-Hop Music via Deep Metr
 ic Learning and an Artificial Dataset - Huw Cheston (University of Cambrid
 ge)
DTSTART:20241119T170000Z
DTEND:20241119T180000Z
UID:TALK223420@talks.cam.ac.uk
CONTACT:125293
DESCRIPTION:*Abstract*\n\nSampling\, the practice of reusing recorded musi
 c or sounds from another source in a new work\, is common in popular music
  genres like hip-hop and rap. Numerous services have emerged that allow us
 ers to identify connections between samples and the songs that incorporate
  them\, with the goal of enhancing music recommendation. Designing a syste
 m that can perform the same task automatically is challenging\, however\, 
 as samples are commonly altered with audio effects like pitch shifting or 
 filtering\, and may only be several seconds long. Progress on this task ha
 s also been hindered by a lack of training data. Here we show
  that a convolutional neural network trained on an artificial dataset can 
 identify real-world samples in commercial hip-hop music. We extract vocal\
 , harmonic\, and percussive elements from several databases of non-commerc
 ial music recordings using audio source separation\, and train the model t
 o fingerprint a subset of these elements in transformed versions of the or
 iginal audio. We optimize the model using a joint classification and metri
 c learning loss and show that it achieves 13% greater precision on real-
 world instances of sampling than a fingerprinting system using acoustic la
 ndmarks\, and that it can recognize samples that have been both pitch shif
 ted and time stretched. We also show that\, for half of the commercial mus
 ic recordings we tested\, our model is capable of locating the position of
  a sample to within five seconds. More broadly\, our results demonstrate h
 ow machine listening models can perform audio retrieval tasks previously r
 eserved for experts.\n\n*Biography*\n\nHuw Cheston is a PhD student at the
  Centre for Music and Science\, University of Cambridge\, focussing on mus
 ic information retrieval. His PhD research uses large-scale quantitative a
 nd computational methods to investigate performance style in improvised mu
 sic\, drawing from audio signal processing\, machine learning\, data scien
 ce\, and corpus analysis. He is also interested in developing reusable sof
 tware\, models\, and datasets that can be deployed by researchers across a
  broad variety of audio-related domains. His research has been published i
 n journals including Royal Society Open Science\, Transactions of the Inte
 rnational Society for Music Information Retrieval\, and Music Perception. T
 he work Huw will be presenting at this seminar derives from research he
  completed as an intern in Spotify's "Audio Intelligence laboratory":http
 s://res
 earch.atspotify.com/audio-intelligence during Summer 2024.\n\n*Zoom link*\
 n\nhttps://zoom.us/j/99433440421?pwd=ZWxCQXFZclRtbjNXa0s2K1Q2REVPZz09 (Mee
 ting ID: 994 3344 0421\; Passcode: 714277)
LOCATION:CMS computer room\, Faculty of Music (11 West Road\, Cambridge\, 
 CB3 9DP)
END:VEVENT
END:VCALENDAR
