
The Minimum Description Length Principle and Machine Learning


  • Speaker: Dr Yoshinari Takeishi, Kyushu University
  • Time: Wednesday 28 January 2026, 14:00-15:00
  • Venue: MR5, CMS Pavilion A.

If you have a question about this talk, please contact Prof. Ramji Venkataramanan.

The Minimum Description Length (MDL) principle states that good learning can be achieved by selecting the model that provides the shortest description of the observed data. It is a key concept that bridges information theory and machine learning, enabling us to understand increasingly important machine learning problems from an information-theoretic viewpoint. In this talk, we first review methods for efficient lossless compression of data generated from an unknown probability distribution (universal coding), with a particular focus on two-stage (two-part) coding. We then introduce the MDL estimator based on two-stage codes and explain how it relates to standard learning formulations. Finally, we present a theorem by Barron and Cover that provides a generalization guarantee for this MDL estimator, thereby offering a rigorous mathematical justification for applying the MDL principle in machine learning.
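The two-stage coding idea in the abstract can be made concrete with a small sketch: encode the fitted parameters first (at the usual cost of roughly (k/2) log2 n bits for k parameters), then encode the data under the fitted model, and select whichever candidate yields the shorter total description. This is a minimal illustration, not the speaker's construction; the model family (a fair coin versus a one-parameter Bernoulli model) and all function names are chosen here for exposition.

```python
import math

def nll_bits(data, p):
    """Codelength of binary data under Bernoulli(p): -log2 likelihood, in bits."""
    ones = sum(data)
    zeros = len(data) - ones
    eps = 1e-12  # guard against log(0) for degenerate p
    return -(ones * math.log2(max(p, eps)) + zeros * math.log2(max(1 - p, eps)))

def two_stage_length(data, n_params, p):
    """Two-stage codelength: (k/2) log2 n bits to describe the quantized
    parameters, plus the codelength of the data given the model."""
    n = len(data)
    return 0.5 * n_params * math.log2(n) + nll_bits(data, p)

def mdl_select(data):
    """Pick the candidate model whose total description of the data is shortest."""
    p_hat = sum(data) / len(data)  # MLE for the one-parameter model
    candidates = {
        "fair coin (k=0)": two_stage_length(data, 0, 0.5),
        "Bernoulli MLE (k=1)": two_stage_length(data, 1, p_hat),
    }
    return min(candidates, key=candidates.get), candidates

# 100 tosses of a heavily biased coin: paying ~3.3 bits for one parameter
# buys a much shorter data description, so the richer model wins.
data = [1] * 80 + [0] * 20
best, lengths = mdl_select(data)
print(best)  # → Bernoulli MLE (k=1)
```

With unbiased data the parameter cost is wasted and the zero-parameter model wins instead, which is exactly the overfitting penalty the MDL principle builds in.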

This talk is part of the Information Theory Seminar series.



© 2006-2025 Talks.cam, University of Cambridge.