You might be interested in the No Free Lunch Theorem (https://en.wikipedia.org/wiki/No_free_lunch_theorem).

From what I skimmed of your paper, it looks like the LH agents may be viewed as discrete optimization processes trying to optimize an objective/utility function across an infinite space of possible environments (infinitely many voters).

If that is the case, and if each environment's vote has the same weight, you may be in a no-free-lunch situation, where the performance of every possible agent (including the random agent) averages out to the same value across all possible environments.

Or, to restate the above: for each environment in which an agent does well, it is possible to construct an "anti-environment" where the agent performs exactly as badly.
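
A toy illustration of that averaging, just a sketch over a made-up 3-point search space (nothing from the paper): over ALL functions from {0,1,2} to {0,1}, every non-repeating search order needs the same number of evaluations, on average, to hit a good point.

    from itertools import product

    DOMAIN = [0, 1, 2]

    def evals_until_first_one(f, order):
        # Performance depends only on the observed trace of values:
        # evaluations until a 1 is seen (len(order) + 1 if never).
        for i, x in enumerate(order, start=1):
            if f[x] == 1:
                return i
        return len(order) + 1

    def average_performance(order):
        # Average over ALL 2^3 functions f: DOMAIN -> {0,1},
        # each represented as the tuple (f(0), f(1), f(2)).
        fs = list(product([0, 1], repeat=len(DOMAIN)))
        return sum(evals_until_first_one(f, order) for f in fs) / len(fs)

    print(average_performance([0, 1, 2]))  # 1.875
    print(average_performance([2, 0, 1]))  # 1.875 -- same for every search order

Any "clever" ordering gets washed out by the uniform average over all functions.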

My personal opinion on the topic of AGI is that it actually is a case of the NFLT.




I think you're right to bring up the NFLT, but I don't think it is applicable; it just points at the real question.

The key assumption needed to get the NFLT is that each environment's vote has the same weight, i.e. we are targeting a uniform distribution over objective functions / environments / problems / whatever you call them.

If you break this assumption, you get the opposite result: search algorithms divide into equivalence classes determined by the sets of different outcomes (traces, if I remember the theorem's statement correctly) that they discriminate between.

A uniform distribution like this is actually a very, very strong precondition. It implies (looking at results about the complexity of sets of strings, since choosing an environment is like choosing a string from 2^N given some encoding, etc.) that you care equally about a very large number of environments, most of which have no compressible structure, or equivalently have huge Kolmogorov complexity. Most of these environments have no compact encoding relative to a particular choice of machine, yet we are weighing them the same as those environments which are actually implementable using less than a ridiculous amount of storage to represent the function.
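
(In case it's useful, the counting argument behind "most have no compressible structure" is just that there aren't enough short programs to go around:

    \#\{\, x \in \{0,1\}^N : K(x) \le N - c \,\} \;<\; 2^{N-c+1}

so at most a 2^{-(c-1)} fraction of N-bit environment encodings can be compressed by even c bits, whatever reference machine you pick.)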

The reason I think this is too strong an assumption to use is that we don't care about all these quadrillions of problems with no compact encoding - we know this because we literally can't encounter them: they would be too large to ever write down using ordinary matter.

Allowing for this, talking usefully about evaluating an AGI (or, equivalently, a search strategy or optimization algorithm) implies having an understanding of the distribution of environments / problems we care about. I think capturing this concept in a 'neat' way would be a significant contribution; I had a go during my PhD but failed to get anywhere. Unfortunately, things like K-complexity are uncomputable, so reasoning about distributions in those terms is a dead end.


Right, the environments are not uniformly distributed. In fact, the paper defines not one single intelligence comparator but an infinite family, parametrized by a hyperparameter which is, essentially, a choice of which environments vote and how to count their votes. Crucially, this doesn't change the truth of the structural theorems (except that some of them require that the hyperparameter satisfy certain constraints).

Other authors (Legg and Hutter, 2007) followed the line of reasoning in your comment much more literally. They proposed measuring the intelligence of an agent as the infinite sum of the expected rewards the agent achieves in each computable environment, weighted by 2^-K, where K is the environment's Kolmogorov complexity. That seems to give "one true measure" of intelligence, but it doesn't, because Kolmogorov complexity depends on a reference universal Turing machine (Hutter himself eventually acknowledged how big a problem this is for his definition; see Leike and Hutter, 2015).
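
For reference, and modulo notation, their measure is roughly

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the class of computable reward-bounded environments, K(mu) is the Kolmogorov complexity of mu's encoding on the chosen reference machine, and V^pi_mu is the expected total reward agent pi earns in mu. Swap the reference machine and every K(mu), hence every weight, changes.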

My position is that any attempt to come up with "one true comparison of intelligence" (as opposed to a parametrized family) should be viewed with skepticism, because relative intelligence really must depend on a lot of arbitrary choices.


Hah, interesting - this is a reference I hadn't seen and I like the sound of it. There was me thinking I'd had an idea of my own one time!

The reference machine thing would be the next problem to argue about if using 2^-K as the weight; whilst you can make the K-complexity of any particular string low by putting an instruction in your machine that says "output this string", that is clearly cheating! So there ought to be a connection between the reference machine and some real physics, since we are perhaps not interested in building optimisers that perform well in universes whose physics is very different from ours.

Sadly, even if this were cracked, I think the fact that K is uncomputable would make the result likely to be useless in practice.

Thanks for your interesting reply, I enjoyed it.


The computability problem can be addressed by using Levin complexity instead of Kolmogorov complexity, an approach you can read about here: http://users.monash.edu/~dld/Publications/2010/HernandezOral...
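
Roughly, and from memory, Levin's Kt charges for runtime as well as program length:

    Kt(x) \;=\; \min_{p \,:\, U(p) = x} \big( |p| + \log \mathrm{time}(p) \big)

which makes it computable (albeit absurdly expensive to evaluate), since the time term bounds how long you ever have to run any candidate program.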

It still suffers from the problem that it's highly lopsided in favor of simpler environments. Of course you're absolutely right that environments too complex to exist in our universe should get low weight. But it's hard to find the right "Goldilocks zone" where those ultra-complex environments are discounted sufficiently, medium-complexity environments aren't overly disenfranchised, and ultra-simple environments aren't given overwhelming authority.

>There was me thinking I'd had an idea of my own one time!

I wouldn't give up. Although it's such a long paper, Legg and Hutter 2007 actually has very little solid content: they propose the definition, and the rest of the paper is mostly filler. There are approximately zero theorems or even formal conjectures. One area I think is ripe for contributions would be to better articulate what the desired properties of an intelligence measure should be. Legg and Hutter offered a measure using Kolmogorov weights, but WHY is that any better than just assigning arbitrary gibberish numbers to agents in some haphazard way? What axioms does it satisfy that one might want an intelligence measure to satisfy?


Thanks for the clarification.

Like I said, I only skimmed your paper, so I hope it was clear my comment was not intended as a criticism (or even as a review) :)

I think I agree with the general terms of your conclusion personally.


Yep, it's clear that the NFLT only applies if we consider all possible environments equally.

In practice, we are indeed not interested in every imaginable environment, only in "realistic" ones.

It was not clear to me whether the paper addressed such concerns for AGI, e.g. when writing:

To achieve good rewards across the universe of all environments, such an AI would need to have (or appear to have) creativity (for those environments intended to reward creativity), pattern-matching skills (for those environments intended to reward pattern-matching), ability to adapt and learn (for those environments which do not explicitly advertise what things they are intended to reward, or whose goals change over time), etc.

But like I said, I only skimmed it.

In general (not talking about the paper here), I have the impression that this is something that may be missed (sometimes even by researchers working in the domain), and I agree very much with your point!

This is why I think the NFLT gives us an interesting theoretical insight here:

Making a "General" AI is not actually about creating an approach that is able to learn efficiently about any type of environment.


Yes - I think you're right that the actual interesting result from NFLT is not that 'optimisation is impossible', but that 'uniform priors are stupid'.



