BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Inductive Logic Programming - Stephen Muggleton (Imperial College 
 London)
DTSTART:20080228T160000Z
DTEND:20080228T180000Z
UID:TALK8215@talks.cam.ac.uk
CONTACT:Zoubin Ghahramani
DESCRIPTION:Inductive Logic Programming (ILP) is the area of Computer Scie
 nce which deals\nwith the induction of hypothesised predicate definitions 
 from examples and\nbackground knowledge. Logic programs are used as a sing
 le representation\nfor examples\, background knowledge and hypotheses.  IL
 P is differentiated from\nmost other forms of Machine Learning (ML) both b
 y its use of an expressive\nrepresentation language and its ability to mak
 e use of logically encoded\nbackground knowledge.  This has allowed succes
 sful applications of ILP\nin areas such as Systems Biology\, computational
  chemistry and Natural\nLanguage Processing.\n\nThe problem of learning a 
 set of logical clauses from examples\nand background knowledge has been st
 udied since Reynolds' and Plotkin's\nwork in the late 1960s. The research
  area of ILP has been studied intensively\nsince the early 1990s.  This ta
 lk will provide an overview of results for\nlearning logic programs within
  the paradigms of learning-in-the-limit\,\nPAC-learning and Bayesian learn
 ing.  These results will be related to various\nsettings\, implementations
  and applications used in ILP.\n\nIt will be argued that the Bayes' settin
 g has a number of distinct advantages.\nBayes' average case results are ea
 sier to compare with empirical machine\nlearning performance than results 
 from either PAC or learning-in-the-limit.\nBroad classes of logic programs
  are learnable in polynomial\ntime in a Bayes' setting\, while correspondi
 ng PAC results\nare largely negative. Bayes' can be used to derive and ana
 lyse\nalgorithms for learning from positive-only examples for classes\nof 
 logic programs which are unlearnable within both the PAC and\nlearning-
 in-the-limit frameworks. It will be shown how a Bayesian\napproach can 
 be used 
 to analyse the relevance of background knowledge\nwhen learning. General r
 esults will also be discussed for\nexpected error given a k-bit bounded in
 compatibility between\nthe teacher's target distribution and the learner's
  prior.\n
LOCATION:LT2 (Inglis Building)\, Department of Engineering
END:VEVENT
END:VCALENDAR
