
Kolmogorov Complexity – A Primer (2012) - poindontcare
https://jeremykun.com/2012/04/21/kolmogorov-complexity-a-primer/
======
cosmoharrigan
An extremely detailed treatment is presented in "An introduction to Kolmogorov
complexity and its applications":
[http://www-2.dc.uba.ar/materias/azar/bibliografia/LiVitanyi1...](http://www-2.dc.uba.ar/materias/azar/bibliografia/LiVitanyi1997AnIntroductiontoKolmogorov.pdf)

------
sn41
There are a lot of good recent books on this topic (I don't know if this area
is undergoing a revival) :

1\. N. K. Vereschagin, V. Uspensky, Alexander Shen : Kolmogorov Complexity
(English draft in preparation: [http://www.lirmm.fr/~ashen/kolmbook-eng.pdf](http://www.lirmm.fr/~ashen/kolmbook-eng.pdf))

2\. Downey, Hirschfeldt: Algorithmic Randomness and Complexity
([http://www.springer.com/gp/book/9780387955674](http://www.springer.com/gp/book/9780387955674))

3\. Andre Nies: Computability and Randomness
([https://global.oup.com/academic/product/computability-and-ra...](https://global.oup.com/academic/product/computability-and-randomness-9780199230761?cc=in&lang=en&))

in addition to the now classic book by Li and Vitanyi that others have
mentioned.

------
mherrmann
When two strings have the same Kolmogorov Complexity, one of them might take
significantly longer to "decompress". Shouldn't we then say that this string
has higher information content?

It feels to me like Kolmogorov Complexity (while very elegant) might just be a
crude approximation to a measure that also takes into account the time it
takes to print the string.

~~~
cosmoharrigan
See "Logical Depth" as defined by Charles Bennett:
[http://researcher.ibm.com/researcher/files/us-bennetc/UTMX.p...](http://researcher.ibm.com/researcher/files/us-bennetc/UTMX.pdf) as well as Chapter 7, "Resource-Bounded Complexity", from
"An introduction to Kolmogorov complexity and its applications".

~~~
mherrmann
I'll take a look. Thanks!

------
cosmoharrigan
Kolmogorov Complexity is also presented in Chapter 7 of the classic textbook
"Elements of Information Theory":
[http://poincare.matf.bg.ac.rs/nastavno/viktor/Kolmogorov_Com...](http://poincare.matf.bg.ac.rs/nastavno/viktor/Kolmogorov_Complexity.pdf)

------
cosmoharrigan
An overview with additional references from Marcus Hutter:
[http://www.scholarpedia.org/article/Algorithmic_complexity](http://www.scholarpedia.org/article/Algorithmic_complexity)

------
woliveirajr
I've used it (via the Normalized Compression Distance) during my Master's
degree.

It was very interesting to find out how effective it was for authorship
attribution, even with 100 possible authors.
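As a rough sketch of how the Normalized Compression Distance works in practice (the author names and texts below are invented, and zlib is just a cheap stand-in for the uncomputable Kolmogorov complexity K):

```python
# Normalized Compression Distance (NCD) sketch: a real compressor's
# output length approximates K from above, and
#   NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
# is small when x and y share structure.
import zlib

def c(data: bytes) -> int:
    """Compressed length: an upper-bound proxy for K(data)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy authorship attribution: pick the candidate author whose known
# writing is closest (in NCD) to the unknown text.
candidates = {
    "author_a": b"the quick brown fox jumps over the lazy dog " * 40,
    "author_b": b"lorem ipsum dolor sit amet consectetur " * 40,
}
unknown = b"the quick brown fox naps under the lazy dog " * 30
best = min(candidates, key=lambda a: ncd(candidates[a], unknown))
```

With 100 authors, the same `min` over 100 reference corpora is the whole classifier, which is part of the method's appeal: no feature engineering, just a compressor.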

------
eveningcoffee
The Linux /dev/random entropy quality check is (or was; I haven't checked
recently) based on Kolmogorov complexity.

~~~
Houshalter
That seems unlikely, because actually computing Kolmogorov complexity is
impossible, and even approximating it is very hard. But you can run random
numbers through compression software, and if they compress, something is very
wrong.
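That kind of check is a few lines in practice. A minimal sketch (not the actual kernel code; zlib is a stand-in for any real compressor, and the `slack` allowance for header overhead is an arbitrary choice):

```python
# Sanity check: a good random stream should be incompressible in
# practice. If a general-purpose compressor shrinks it noticeably,
# the entropy source is suspect. This is a heuristic upper bound on
# Kolmogorov complexity, not the (uncomputable) real thing.
import os
import zlib

def looks_incompressible(data: bytes, slack: int = 64) -> bool:
    """True if compression saves fewer than `slack` bytes."""
    return len(zlib.compress(data, 9)) >= len(data) - slack

random_bytes = os.urandom(1 << 16)  # stand-in for /dev/random output
patterned = bytes(1 << 16)          # obviously low-entropy stream (all zeros)
```

Here `looks_incompressible(random_bytes)` should come back true and `looks_incompressible(patterned)` false; the all-zero stream collapses to a few hundred bytes.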

~~~
eveningcoffee
I am not an expert in this. Here is the reference material
[https://eprint.iacr.org/2012/487.pdf](https://eprint.iacr.org/2012/487.pdf)
and it feels plausible.

------
Ono-Sendai
If we're doing Kolmogorov complexity reposts:
[http://forwardscattering.org/post/7](http://forwardscattering.org/post/7)
[http://forwardscattering.org/post/14](http://forwardscattering.org/post/14)

~~~
mafribe
How many times do you want people to tell you that everybody knows that
Kolmogorov complexity (KC) is only defined up to a constant?

This does _not_ affect the results that people use KC for, like the
incompressibility of most strings.

~~~
Ono-Sendai
That's what my second post linked above addresses. I think it _does_ affect
the 'incompressibility of most strings' result. I'm fine with not rehashing
the argument though, unless you're keen to do so :)

~~~
mafribe
What you would need to do is produce a programming language L in which the
incompressibility of most strings fails, i.e. contradict Theorem 2.2.1 of Li &
Vitanyi.
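The theorem is just a pigeonhole count, which is why no choice of language escapes it. A back-of-the-envelope version (the values of n and k here are arbitrary):

```python
# Counting argument behind the incompressibility theorem: there are
# 2**n binary strings of length n, but only 2**(n-k) - 1 programs
# shorter than n - k bits, so at most a 2**-k fraction of strings can
# be compressed by k or more bits -- in ANY programming language.
n, k = 20, 3
num_strings = 2 ** n                # binary strings of length n
short_programs = 2 ** (n - k) - 1   # programs shorter than n - k bits
fraction = short_programs / num_strings
assert fraction < 2 ** -k           # under 1/8 of strings for k = 3
```

The language only shifts complexities by an additive constant; the count of short programs stays the same, so most strings remain incompressible.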

