IBM Technical Journals

Special report: Celebrating 50 years of the IBM Journals

Algorithmic information theory

by G. J. Chaitin

This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
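One of the quantities mentioned above, the probability that a program whose bits are chosen by coin flipping produces a given output, can be illustrated with a toy model. The sketch below is an assumption for illustration only (it is not Chaitin's universal machine): programs are bit strings read left to right, a `1` emits one symbol, and a `0` halts, so the probability of any particular output can be estimated by sampling random bit strings.

```python
import random

# Toy machine (illustrative assumption, not Chaitin's formalism):
# read bits left to right; '1' emits one symbol, '0' halts.
def run(program):
    out = []
    for bit in program:
        if bit == 0:
            return out          # halt instruction: output is complete
        out.append('x')         # '1' emits one symbol
    return None                 # ran off the end: treat as non-halting

# Estimate the probability that a coin-flipped program produces the
# output ['x', 'x'], i.e. that its bits begin 1, 1, 0.
random.seed(0)
trials = 100_000
hits = sum(run([random.randint(0, 1) for _ in range(8)]) == ['x', 'x']
           for _ in range(trials))
print(hits / trials)  # should be near the exact value 2**-3 = 0.125
```

In this toy setting the probability is exactly 2^-3, since only the first three coin flips matter; the sampled estimate converges to that value.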

Originally published:

IBM Journal of Research and Development, Volume 21, Issue 4, pp. 350-359 (1977).

Significance:

In 1931 Kurt Gödel showed that there are true mathematical assertions that cannot be proved within any given formal axiomatic theory. In 1936 Alan Turing showed that the answers to some questions about computer programs cannot be computed by any algorithm. This paper reviews algorithmic information theory, which builds on the work of Gödel and Turing and shows that there are areas of pure mathematics which have absolutely no structure or pattern. In other words, randomness occurs not only in subatomic physics, but even in pure math. In such cases only statistical laws apply.

This highly cited paper was a milestone in the field of computational complexity, an important subdomain of theoretical computer science.
