BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Training Random Forests with Ambiguously Labeled Data - Christian 
 Leistner
DTSTART:20110407T093000Z
DTEND:20110407T103000Z
UID:TALK30571@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:Nowadays\, an increasing number of computer vision application
 s rely on powerful machine learning algorithms. Usually\, supervised alg
 orithms are applied\, which demand large amounts of hand-labeled samples
  in order to yield accurate results.\nAlthough the number of digital ima
 ges is exploding\, collecting large amounts of labeled data can still be
  tedious\, and even when labels are available\, they can be noisy or for
 matted in a way that is not optimal for the learning method to exploit -
  consider bounding-box annotations in images. This motivates the develop
 ment of learning algorithms that can exploit both small amounts of label
 ed data and large amounts of unlabeled data\, which are usually easy to
  obtain\, while also allowing for a certain amount of flexibility in th
 e labeling.\n\nIn this talk\, I will show how to use Random Forests (RF
 s) to tackle these challenges. RFs deliver state-of-the-art results in
  various applications. They are fast in both training and evaluation\,
  are inherently multi-class\, run on parallel architectures and are rob
 ust to label noise. This makes them ideal candidates for exploiting lar
 ge amounts of unlabeled or ambiguously labeled samples. On the other ha
 nd\, they demand large amounts of data to reach their full potential\,
  which in turn motivates the incorporation of unlabeled samples into th
 eir training. In particular\, I will present extensions of RFs to semi-
 supervised and multiple-instance learning\, as well as to online learni
 ng\, which is needed in many applications. Finally\, I will present a n
 ew method that is able to benefit from unlabeled data even when the sam
 ples come from different distributions or are only weakly related to th
 e actual task.\n
LOCATION:Small lecture theatre\, Microsoft Research Ltd\, 7 J J Thomson Av
 enue (Off Madingley Road)\, Cambridge
END:VEVENT
END:VCALENDAR
