Hutter Prize (wikipedia.org)
111 points by jonbaer on March 24, 2021 | 48 comments



It should be noted that a relaxation of 5,000 bytes per day of inactivity has been in effect since January 2021 [1]. Since a submission is required to improve by at least 1% on the last valid submission plus the relaxation, the last winner could resubmit virtually the same entry after 234 (= ceil(1,166,736 / 5,000)) days of inactivity and claim an award. In practice this means that a small tweak that was previously unable to pass the minimum improvement threshold is now eligible.

[1] http://prize.hutter1.net/hfaq.htm#ddisc
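
For concreteness, a quick sanity check of that 234-day figure in Python (taking the 1,166,736-byte required improvement, i.e. 1% of the last record, at face value):

    import math

    required_improvement = 1_166_736   # 1% of the last record, per the figure above
    relaxation_per_day = 5_000         # bytes of relaxation per day without a new entry

    print(math.ceil(required_improvement / relaxation_per_day))   # -> 234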


The idea that AI and data compression are isomorphic is really interesting and makes me think of Ashby's law of requisite variety, where the number of possible states (this could also be rendered in information-theoretic terms as entropy) of an agent operating in an environment must be at least as high as the number of states of the environment itself. Thus, to navigate a given environment is to be able to represent that environment internally. Reducing the byte size of that representation then means that you are representing the environment more tersely, and in my experience (there's probably a theoretical explanation for this, but I've only seen it experientially) terser-but-just-as-accurate representations are usually better models.


This paper [0] discusses similar ideas from a neuroscience perspective of feature extraction in deeper layers, and argues for the possible significance of minimizing the representation decision space in the one-shot learning paradigm.

[0] https://www.biorxiv.org/content/10.1101/2021.03.21.436284v1....


Having thought about it a bit, I think the idea of terser accurate models being "better" is the core of the idea of "beauty" in programming (and probably elsewhere).


Schmidhuber thinks along similar lines: https://people.idsia.ch/~juergen/beauty.html (he has also worked with Hutter on AI: https://people.idsia.ch/~juergen/unilearn.html ).


Fascinating, thanks for sharing. The idea that a formal theory of beauty could belong within the memeplex of complexity theory is really interesting. I find that complexity theory itself is full of beautiful concepts, so perhaps it's no surprise.


I've always felt that the best non-CS programmers were physicists, because they are always looking for the simplest model (e.g., the same community that rejected a 27-dimensional string theory as being too complex, but don't quote me on this, I'm no physicist).


> the prize awards 5000 euros for each one percent improvement (with 500,000 euros total funding)

I get the feeling that Marcus Hutter is going to be able to keep some of his money.

EDIT: This apparently isn't the case! It turns out the prize is based on improvements over the current record, not over the starting baseline. From the website's FAQ [1]:

> Theoretically you can win more than 500'000€. For instance, if you halve the current record, then halve your own record, and then again halve it, you would receive 3×250'000€

We're still not particularly likely to see a full payout, but much more likely than the way I initially interpreted it.

[1]: http://prize.hutter1.net/hfaq.htm#money


If anyone wants to try this challenge, the talk by Alexander Rhatushnyak (the current #1 on the leaderboard) [1] is a pretty good starting point.

[1] https://www.youtube.com/watch?v=gqpsD6OKSjM


How does the competition stop a highly-specific compressor that is tuned to the particular text file (and not much else) from winning?


> The total size of the compressed file *and decompressor* (as a Win32 or Linux executable) must not be larger than 99% of the previous prize winning entry.

Emphasis mine. You can make a highly specific compressor, but it'll need a highly specific decompressor, which will count against your score.

In essence, the competition is "create the smallest executable that produces enwik9 as its output".
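
A sketch of the scoring in Python, with hypothetical file names (archive.bin for the compressed data, decomp for the decompressor binary) and a made-up previous total: only their combined size matters, and it must come in under 99% of the previous record.

    import os

    previous_record = 120_000_000   # hypothetical previous total in bytes; see the prize site for the real figure

    total = os.path.getsize('archive.bin') + os.path.getsize('decomp')
    threshold = 0.99 * previous_record
    print(total, '<=', int(threshold), ':', 'eligible' if total <= threshold else 'too big')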


Which is equivalent to approaching the Kolmogorov complexity of the text, a deeply interesting research problem in information theory.


Aha yes, I see


It doesn't. But the "particular text file" is a huge chunk of Wikipedia, so whatever you come up with isn't going to be as "highly specific" as you'd think.


I think what's confusing me is that they think AI is the solution, which would tend towards a huge, very clever compressor program (think something like GPT-3). But presumably the competition also has limits on the size of the compressor, because you want to avoid trivial solutions like embedding the 1 GB file in the compressor. So how do they manage this contradiction between wanting the compressor to be very clever while also limiting its size?


The compressor is the thing being measured; there is no separate input file or anything.

If you embed 1GB of text in that compressor, then your 'compressed size' will be 1GB (plus whatever else is in the compressor).


Just read the rules.

The size of the decompressor is included in the total size of the compressed data.


Right, but then they're not going to get a very clever compressor. The ideal AI compressor would encompass all human knowledge.

It just seems like they've hit upon a good idea - the AI/compression crossover - and then shot themselves in the foot by excluding anything genuinely intelligent. I suppose the problem is how to tune the rules to allow AI but not cheating.


> very clever

> ideal

> all human knowledge

> genuinely intelligent

> allow AI but not cheating

Those are all vague, hand-wavy concepts, open to disagreement, and in some cases they might turn out not to exist or make sense. Entire research careers have been spent trying to even define these terms, let alone implement them.

Focusing on compression of a particular chunk of Wikipedia eliminates all of that, and gives us a precisely measurable quantity.

Is it a perfect definition of intelligence? No; it was never meant to be.

Is it a runnable, measurable, comparable experiment? Yes.


> The ideal AI compressor would encompass all human knowledge.

That's what they're doing (within the constraint of defining "all human knowledge" as roughly the English Wikipedia). More refined models of representing all of our knowledge will beat weaker ones in this test.

Whether this is AI or intelligence or not depends on how you define those terms - but to win this competition your compressor has to be able to anticipate the rest of human knowledge fairly accurately based on having seen part of it.


OK, I see the idea now. The winning program is currently 15 MB or so. I can see how that means _something_.

But I'm wondering if starting with a 1 TB knowledge file might lead to more interesting AI outcomes, as it would allow for larger models to be in play.


The size of the program is counted as part of the compressed size.


Or, going the other way, if there were a procedural generator that produced the required output from some short seed, that would basically win the game for good.

But I guess the possibility of such a generator existing (within the size constraints) for this specific text is pretty much zero.


You can't produce information out of thin air, so basically your "generator" would need to store all the information in the file somehow, and it boils down to writing a good compressor to store it.


You can't produce arbitrary information out of thin air, but you could produce one specific piece of it by chance.

E.g., a Minecraft world seed carries very little data compared to the world that the generator consistently produces from it.

The catch is that only a minuscule fraction of all possible worlds can actually be generated, since there is only a finite number of corresponding seeds.
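
As a toy illustration (a generic seeded PRNG, nothing Minecraft-specific): the same small seed always expands into the same large blob, but a 64-bit seed can only ever reach 2^64 of the astronomically many possible blobs.

    import random

    def generate_world(seed, size=1_000_000):
        # Same seed -> same pseudo-random byte stream, every time (Python 3.9+).
        return random.Random(seed).randbytes(size)

    assert generate_world(42) == generate_world(42)   # 8-byte seed, 1 MB of reproducible "world"
    # ...but only 2**64 such blobs are reachable, out of 256**1_000_000 possible 1 MB files.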


Storing arbitrary information via RNG configuration seems improbable.


I think it's unfortunate that many people look at size alone when considering data compression. Decompression speed is also an important factor. For example, xz usually has better compression than Zstandard by about 8%, but at the same time Zstandard can usually decompress about 10 times faster. Yet without fail, people try to rationalize the slightly smaller output and hand-wave away any other factors.
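
For a rough feel of that trade-off using only Python's standard library (lzma standing in for xz, zlib for a faster codec; real zstd needs a third-party package):

    import lzma, time, zlib

    data = open('some_large_test_file', 'rb').read()   # any sizeable file you have around

    for name, comp, decomp in [('lzma/xz', lzma.compress, lzma.decompress),
                               ('zlib', zlib.compress, zlib.decompress)]:
        blob = comp(data)
        t0 = time.perf_counter()
        assert decomp(blob) == data
        print(f'{name}: ratio {len(blob) / len(data):.3f}, decompress {time.perf_counter() - t0:.2f}s')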


First, the competition does place a time limit (in fact, several resource limits). I agree that they may be too lax to judge consumer-oriented compression algorithms, but that is not what this competition is about.

The goal is maximizing the compression ratio, with the aim of advancing AI: an AI system can be seen as a lossy compressor in which some worldly input is compressed into something that we hope mimics "understanding". Improvements in compression may translate into AI advances [0], and potentially vice versa.

To draw a parallel away from computing:

One could say we are compressors that experience the world and compress it to lose what we don't care about. Improving our compression is basically akin to "knowing more" or "remembering more of the right things". See Schmidhuber [1].

[0] http://prize.hutter1.net/hfaq.htm#compai

[1] https://arxiv.org/abs/0812.4360


> One could say we are compressors that experience the world and compress it to lose what we don't care about.

That's how lossy compression works, but this competition is about lossless compression. Makes me wonder if the competition should really be about lossy compression, as it's pretty obvious our minds aren't lossless compressors.


This point has been argued over and over, and the FAQ offers counterarguments [1]. Also, if you can build a good lossy compressor, it's relatively easy to convert it into a lossless one (residual coding), but the converse does not hold in general.

[1] http://prize.hutter1.net/hfaq.htm#lossless
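
A minimal sketch of the residual-coding idea mentioned above, on a toy numeric signal: the lossy stage here is just a crude quantizer (any lossy model would do), and storing its small residuals losslessly gives back the input exactly.

    import struct, zlib

    signal = [12, 15, 14, 13, 100, 101, 99, 98]          # input we want back bit-exactly

    approx = [round(x / 10) * 10 for x in signal]         # lossy stage: coarse quantization
    residuals = [x - a for x, a in zip(signal, approx)]   # what the lossy stage threw away
    packed = zlib.compress(struct.pack(f'{len(residuals)}b', *residuals))

    # Decoder: lossy reconstruction + losslessly coded residuals == original.
    unpacked = struct.unpack(f'{len(residuals)}b', zlib.decompress(packed))
    restored = [a + r for a, r in zip(approx, unpacked)]
    assert restored == signal

The catch, of course, is that the residuals are only cheap to code if the lossy model is a good predictor.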


I agree with this in many cases, but I read some of the Hutter Prize documentation and it seems like the people running the prize have very specific reasons for neglecting compressor and decompressor runtimes for this contest (although I'm not sure I could do justice to them).


You make a good point. I've often wondered why there aren't compression container formats that can do magic like handling media file compression differently (using lossless formats), but both the time to compress and to decompress would be prohibitive (and, personally speaking, I'd just leave it in the smaller lossless format, assuming my players will play it).


From a business perspective, decompression complexity wouldn't be the biggest issue if it didn't cause a noticeable slowdown in processing on the client (the one doing the decompressing). At a SaaS, for example, if you can save 30% on outbound transfer, it might be worth an extra 10 ms of time-to-paint.


So would something like tensorflow-compress[1] v4 (at some point) be the ideal candidate for breaking the current record?

[1] https://github.com/byronknoll/tensorflow-compress


The current leader uses neural networks (LSTMs, I believe), so that's obviously a good strategy. But since the size of the decoder gets counted, vanilla TensorFlow is probably a bad choice.


You can submit your compressor compressed too, if TensorFlow networks lend themselves to compression.


For now, GPUs are prohibited, as high-performance GPGPU hardware capable of ML is not that widespread.


The Wikipedia article on the Hutter Prize should have a cross-reference to the Loebner Prize as there's a close analogy and some of the same criticism applies, I think.


That analogy doesn't work. Most criticisms of the Loebner Prize boil down to its reliance on human judgement (and agents that exploit this aspect without actual intelligence), and the Hutter Prize doesn't have any. The AI connection in the Hutter Prize is, while still up for debate, at best suggestive and not a goal in itself.


There are some important differences, and the subjectivity of human judgement is a major one. Also, compression is an interesting goal in its own right. But if the Hutter Prize is proposed as a way of encouraging AI research then I still claim that some of the criticism of the Loebner Prize is applicable.

To me it seems doubtful whether compression of a 1 GB text corpus could benefit from AI even in theory: if you can get it down to about 15 MB without AI, then any AI would have a very tight budget. Perhaps compression of a 1 TB text corpus could benefit from AI in theory, but that doesn't mean that people working on AI would have much chance of winning the prize in the foreseeable future, because there might be lots of non-AI techniques that provide a bigger improvement in the short or medium term. (A similar criticism was made of the Loebner Prize in the 1990s. Sorry, I can't immediately find a link to it.)


The argument is that AI and compression are isomorphic, so in that view there is no such thing as a non-AI compression technique. And in fact, if you look at how common compressors work, they build a model of the next token given the previous tokens, exactly like language models in AI.
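
As a toy illustration of that equivalence, an order-1 character model (predict the next character from the previous one) already gives an estimate of the compressed size an arithmetic coder driven by it would achieve: better prediction means fewer bits.

    import math
    from collections import Counter, defaultdict

    text = open('enwik8', errors='ignore').read()[:1_000_000]   # or any decent-sized text sample

    # Count how often each character follows each context character.
    counts = defaultdict(Counter)
    for prev, cur in zip(text, text[1:]):
        counts[prev][cur] += 1

    totals = {c: sum(f.values()) for c, f in counts.items()}
    # Ideal code length under the model: sum of -log2 P(next | prev).
    bits = sum(-math.log2(counts[p][c] / totals[p]) for p, c in zip(text, text[1:]))
    print(f'{bits / 8 / len(text):.3f} compressed bytes per input byte')

(This is an optimistic, in-sample figure: the model has already seen the data it is scoring, which is exactly why a real entry also has to pay for the decompressor that embodies the model.)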


> Welcome to Wikipedia, the free encyclopedia that anyone can edit.

Hint :)


wget http://mattmahoney.net/dc/enwik9.zip

gunzip enwik9.zip

Where do I claim my money?

Edit: alternate solution[1]:

cat /dev/urandom

[1] this solution only works some of the time


Taken literally, this comment feels like a joke -- which is probably why some HN people downvoted it.

However, think about the first part of this comment from an AI point of view. The program itself is the compression. An intelligent agent could discover this kind of program.

Notes: This isn't perfect, since it depends on having network access, reliability, and so on. However, maximally intelligent agents, in my view, know how to balance situational and environmental factors in order to solve a problem.


Maybe you should read the competition rules that explicitly specify what input is permitted.


Per HN guidelines [1], it would be better and nicer to say something like this: "the competition rules disallow input from other sources (files, network, dictionaries, etc.)" -- http://prize.hutter1.net/hrules.htm

[1] The HN guidelines write:

> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that." -- https://news.ycombinator.com/newsguidelines.html


I'm not telling the author to read the article, I'm telling them to read the actual rules which anticipated this trivial attempt to be clever. Per HN guidelines [1], it would have been better to do that than to post a shallow dismissal of the idea.

The article only contains a paraphrasing of the rules missing these essential details. Per HN guidelines [2], it would have been better to post the original link and avoid the confusion that you now also fell for.

https://news.ycombinator.com/newsguidelines.html

[1] Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

[2] Please submit the original source. If a post reports on something found on another site, submit the latter.

The HN rules aren't terrible, but with no one actually following them, nor ever having followed them for as long as this site has existed, I don't see the point in bandying them about to make pointless comments.


Thanks for the polite reply. I really did read the rules, I must have just missed the part about “external sources”.

Of course, since standard libraries are allowed, the logical way to win the challenge is for someone who has commit access to a standard library to include the text as part of an update to that library.



