
Asymptotics of Discrete MDL for Online Prediction

Files in This Item:
01522640.pdf (main, 444.94 kB, PDF)
01603798.pdf (erratum, 26.27 kB, PDF)
Please use this identifier to cite or link to this item: http://hdl.handle.net/2115/8468

Title: Asymptotics of Discrete MDL for Online Prediction
Authors: Poland, Jan
Hutter, Marcus
Keywords: algorithmic information theory
classification
consistency
discrete model class
loss bounds
minimum description length (MDL)
regression
sequence prediction
stabilization
universal induction
Issue Date: Nov-2005
Publisher: IEEE
Journal Title: IEEE Transactions on Information Theory
Volume: 51
Issue: 11
Start Page: 3780
End Page: 3795
Publisher DOI: 10.1109/TIT.2005.856956
Abstract: Minimum description length (MDL) is an important principle for induction and prediction, with strong relations to optimal Bayesian learning. This paper deals with learning processes which are not necessarily independent and identically distributed, by means of two-part MDL, where the underlying model class is countable. We consider the online learning framework, i.e., observations come in one by one, and the predictor is allowed to update its state of mind after each time step. We identify two ways of predicting by MDL for this setup, namely, a static and a dynamic one. (A third variant, hybrid MDL, will turn out inferior.) We will prove that under the only assumption that the data is generated by a distribution contained in the model class, the MDL predictions converge to the true values almost surely. This is accomplished by proving finite bounds on the quadratic, the Hellinger, and the Kullback-Leibler loss of the MDL learner, which are, however, exponentially worse than for Bayesian prediction. We demonstrate that these bounds are sharp, even for model classes containing only Bernoulli distributions. We show how these bounds imply regret bounds for arbitrary loss functions. Our results apply to a wide range of setups, namely, sequence prediction, pattern classification, regression, and universal induction in the sense of algorithmic information theory, among others.
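For orientation, a minimal sketch (not taken from the paper) of static-style two-part MDL online prediction, using a countable Bernoulli model class truncated to a finite parameter grid for computability; the grid, the uniform model code, and all names below are illustrative assumptions.

import math

# Hypothetical model class (assumption, for illustration): Bernoulli(theta)
# for theta on a finite grid, standing in for a countable class.
K = 9
thetas = [(k + 1) / (K + 1) for k in range(K)]
# Prefix codelengths of the models, in bits (here: a uniform code over K models).
model_codelen = [math.log2(K)] * K

def data_codelen(theta, xs):
    # Codelength of the binary sequence xs under Bernoulli(theta), in bits.
    return -sum(math.log2(theta if x == 1 else 1.0 - theta) for x in xs)

def mdl_predict(xs):
    # Two-part MDL: select the model minimizing
    # codelength(model) + codelength(data | model) on the data seen so far,
    # then predict with the selected model (static-style prediction).
    best = min(range(K), key=lambda i: model_codelen[i] + data_codelen(thetas[i], xs))
    return thetas[best]  # predictive probability that the next bit is 1

# Online loop: observations arrive one by one and the predictor updates each step.
sequence = [1, 1, 0, 1, 1, 1, 0, 1]
history = []
for x in sequence:
    p = mdl_predict(history)
    print(f"P(next bit = 1 | history) = {p:.2f}, observed {x}")
    history.append(x)

Under the paper's assumption that the generating distribution lies in the model class, the selected model stabilizes and the predictions converge almost surely, though with the exponentially worse loss bounds stated in the abstract.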
Description: Erratum published in IEEE Transactions on Information Theory, Volume 52, Issue 3, March 2006, p. 1279
Description URI: https://doi.org/10.1109/TIT.2006.869753
Rights: (c) 2005-2006 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Type: article
URI: http://hdl.handle.net/2115/8468
Appears in Collections: 情報科学院・情報科学研究院 (Graduate School of Information Science and Technology / Faculty of Information Science and Technology) > 雑誌発表論文等 (Peer-reviewed Journal Articles, etc)

Submitter: Jan Poland

