BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Modeling and Predicting Emotion in Music - Erik M. Schmidt (Drexel
  University)
DTSTART:20121018T101500Z
DTEND:20121018T110000Z
UID:TALK40870@talks.cam.ac.uk
CONTACT:Vaiva Imbrasaite
DESCRIPTION:With the explosion of vast and easily-accessible digital music
  libraries over the past decade\, there has been a rapid expansion of rese
 arch towards automated systems for searching and organizing music and rela
 ted data. Online retailers now o er vast collections of music\, spanning t
 ens of millions of songs\, available for immediate download. While these o
 nline stores present a drastically different dynamic than the record store
 s of the past\, consumers still arrive with the same requests - recommenda
 tion of music that is similar to their tastes\; for both recommendation an
 d curation\, the vast digital music libraries of today necessarily require
  powerful automated tools.\n\nThe medium of music has evolved specifically 
 for the expression of emotions\, and it is natural for us to organize musi
 c in terms of its emotional associations. But while such organization is a
  natural process for humans\, quantifying it empirically proves to be a ve
 ry difficult task. Myriad features\, such as harmony\, timbre\, interpreta
 tion\, and lyrics affect emotion\, and the mood of a piece may also change
  over its duration. Furthermore\, in developing automated systems to organ
 ize music in terms of emotional content\, we are faced with a problem that
  oftentimes lacks a well-dened answer\; there may be considerable disagre
 ement regarding the perception and interpretation of the emotions of a son
 g or even ambiguity within the piece itself.\n\nAutomatic identification of
  musical mood is a topic still in its early stages\, though it has receive
 d increasing attention in recent years. Such work offers potential not jus
 t to revolutionize how we buy and listen to our music\, but to provide dee
 per insight into the understanding of human emotions in general. This work
  seeks to relate core concepts from psychology to those of signal processin
 g to understand how to extract information relevant to musical emotion fro
 m an acoustic signal. The methods discussed here survey existing features 
 using psychology studies and develop new features using basis functions le
 arned directly from magnitude spectra. Furthermore\, this work presents a 
 wide breadth of approaches in developing functional mappings between acous
 tic data and emotion space parameters. Using these models\, a framework is
  constructed for content-based modelling and prediction of musical emotion
 .
LOCATION:Rainbow Room (SS03)\, Computer Laboratory
END:VEVENT
END:VCALENDAR
