
You can’t keep the reference Turing machine fixed in light of the linear speedup theorem.

Hence the argument for compression.

A Turing machine with a better language is faster.

It compresses time.




Yes, I agree there is always a faster Turing machine for a particular target. But with this sort of argument, every bitstring has a compression of zero bits: you just pick a TM that outputs the target bitstring on the null input. And if that TM starts with the target already on its output tape, it also takes zero time. Thus every bitstring can be generated in zero space and zero time, given some TM.

So that's why, when talking about compression in general, we keep the reference TM fixed.
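To make the reference-machine dependence concrete, here is a small Python sketch (my own illustration, not from the thread): off-the-shelf compressors stand in for the choice of reference machine, and the "complexity" you measure for the very same string changes with that choice.

    import bz2
    import lzma
    import zlib

    # Same string, three different "reference machines" (compressors).
    # The measured size depends on which reference you fixed.
    data = (b"abab" * 1000) + b"the quick brown fox jumps over the lazy dog"

    for name, compress in [("zlib", zlib.compress),
                           ("bz2", bz2.compress),
                           ("lzma", lzma.compress)]:
        print(f"{name:5s}: {len(compress(data))} bytes")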

When testing, there is the possibility that, with a single test, we happen to have the wrong reference machine. But as the number of tests grows, that probability approaches zero. So with enough testing we can close the linear speedup theorem loophole, at least with high probability.

As a practical application of this sort of thing, check out the 'normalized information distance'. It is based on algorithmic mutual information, which in theory is not approximable, as you argue. However, in practice it works surprisingly well, even without prior knowledge of the domain.
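The usual practical stand-in is the normalized compression distance, which replaces the uncomputable Kolmogorov complexity with the length of a real compressor's output. A rough Python sketch (my own illustration, using zlib as the compressor):

    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        # Normalized compression distance: approximate the normalized
        # information distance by swapping Kolmogorov complexity K(.)
        # for compressed length under a real compressor.
        cx = len(zlib.compress(x))
        cy = len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Similar strings score near 0, unrelated strings closer to 1.
    print(ncd(b"the cat sat on the mat", b"the cat sat on a mat"))
    print(ncd(b"the cat sat on the mat", b"01101110101101001010110"))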



