## Algorithmic information theory

*From CryptoDox, The Online Encyclopedia on Cryptography and Information Security*
## Overview

Algorithmic information theory principally studies Kolmogorov complexity and other complexity measures on strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers and real numbers.

The term "information" may be a bit misleading, as it is used in a way that is synonymous with "incompressibility": a string has high information content, from the point of view of algorithmic information theory, if the information it contains cannot be expressed more compactly. One reason this may be misleading is that, from this point of view, a 3000-page encyclopedia actually contains less information than 3000 pages of completely random letters, despite the fact that the encyclopedia is much more useful. This is because to reconstruct the entire sequence of random letters, one must know, more or less, what every single letter is. On the other hand, if every vowel were removed from the encyclopedia, someone with reasonable knowledge of the English language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc hs lw nfrmtn cntnt" from the context and the consonants present.

For this reason, high-information strings and sequences are sometimes called "random". People also sometimes attempt to distinguish between "information" and "useful information", and to provide rigorous definitions for the latter, with the idea that the random letters may contain more information than the encyclopedia, but the encyclopedia contains more "useful" information.

Unlike classical information theory, algorithmic information theory gives formal, rigorous definitions of a random string and a random infinite sequence that do not depend on physical or philosophical intuitions about nondeterminism or likelihood.
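The link between information content and incompressibility can be illustrated with an ordinary general-purpose compressor. The sketch below (plain Python, using only the standard-library `zlib` and `os` modules) treats compressed length as a crude, computable stand-in for Kolmogorov complexity, which is itself uncomputable: highly redundant text shrinks dramatically, while random bytes barely shrink at all.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    # The zlib-compressed length is a computable upper bound on the
    # information content of the data. True Kolmogorov complexity is
    # uncomputable, so this is only a rough proxy for illustration.
    return len(zlib.compress(data, 9))

structured = b"the quick brown fox " * 500   # highly redundant text
random_ish = os.urandom(len(structured))     # incompressible with high probability

print(compressed_size(structured))  # far smaller than the 10000-byte input
print(compressed_size(random_ish))  # close to the input length
```

A real compressor exploits only certain kinds of regularity, so a string it fails to shrink is not necessarily algorithmically random; the converse direction, however, holds: anything zlib can shorten is certainly compressible in the Kolmogorov sense.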
Some of the results of algorithmic information theory, such as Chaitin's incompleteness theorem, appear to challenge common mathematical and philosophical intuitions. Most notable among these is the construction of Chaitin's constant Ω, a real number that expresses the probability that a self-delimiting universal computer will halt when its input is chosen at random; Ω is well defined, yet it is algorithmically random and uncomputable.

## History

The field was developed by Andrey Kolmogorov, Ray Solomonoff, and Gregory Chaitin, starting in the late 1960s. There are several variants of Kolmogorov complexity, or algorithmic information; the most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974). Per Martin-Löf also contributed significantly to the information theory of infinite sequences.

## Precise Definitions
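In the prefix-free formulation, with <math>K(s)</math> denoting the length of a shortest self-delimiting program that outputs <math>s</math> on a fixed universal machine, the central definitions can be stated as follows (a sketch of the standard convention; some authors vary the additive constants):

A binary string <math>s</math> is said to be algorithmically random if

<math>K(s) \ge |s|,</math>

that is, if no program shorter than <math>s</math> itself produces it. An infinite binary sequence <math>\omega</math> is said to be algorithmically random if there exists a constant <math>c</math> such that, for all <math>n</math>,

<math>K(\omega_1 \omega_2 \cdots \omega_n) \ge n - c.</math>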
(Related definitions can be made for alphabets other than the set <math>\{0,1\}</math>.)

## See also

- Kolmogorov complexity
- Algorithmically random sequence
- Algorithmic probability
- Chaitin's constant
- Chaitin–Kolmogorov randomness
- Computationally indistinguishable
- Distribution ensemble
- Epistemology
- Invariance theorem
- Limits to knowledge
- Minimum description length
- Minimum message length
- Pseudorandom ensemble
- Pseudorandom generator
- Uniform ensemble