Hacker News

Rather than reframing intelligence itself, wouldn’t Occam’s Razor suggest instead that this isn’t intelligence at all?




IMO Occam's Razor suggests that this is exactly what intelligence is.

The ability to compress information: specifically, to run it through a simple rule that allows you to predict some future state.

The rules are simple but finding them is hard. The ability to find those rules, compress information, and thus predict the future efficiently is the very essence of intelligence.
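A toy illustration of the "compression as prediction" idea (my sketch, not from the thread): a noiseless arithmetic sequence of 1,000 points is fully described by a two-parameter rule, and that rule predicts states far outside the observed data. The function name `fit_line` and the hidden rule y = 3x + 7 are made up for the example.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in pure Python."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# 1,000 observations generated by a hidden rule y = 3x + 7.
xs = list(range(1000))
ys = [3 * x + 7 for x in xs]

a, b = fit_line(xs, ys)    # the "compressed" description: just 2 numbers
prediction = a * 5000 + b  # predict a state far outside the data

print(round(a), round(b), round(prediction))  # → 3 7 15007
```

Two numbers now stand in for a thousand observations, and the hard part was never storing the data but finding the rule.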


I don't really think that is what Occam's Razor is about. The Razor says the simplest answer is most likely the best, but we already know that intelligence is very complex, so even the simplest answer to intelligence is still going to be a massively complex solution.

In some ways this answer does fit Occam's Razor, by saying the simple ingredient is just scale, not complex algorithms.


> Intelligence isn't about memorising information—it's about finding elegant patterns that explain complex phenomena. Scale provides the computational space needed for this search, not storage for complicated solutions.

I think the word "finding" is overloaded here. Are we "discovering," "deriving," "deducing," or simply "looking up" these patterns?

If "finding" can be implemented via a multi-page tour (i.e., a deterministic choose-your-own-adventure) of a three-ring binder (which is, essentially, how inference operates), then we're back at Searle's Chinese Room, and no intelligence is operative at runtime.

On the other hand, if satisfying "finding" necessitates the creative synthesis of novel records pertaining to, if not outright modeling, external phenomena (e.g., "finding" a proof), then arguably it's not happening at training time, either.

How many novel proofs have LLMs found?


Even simpler: intelligence is the art of simplifying. LLMs can fool us if they reduce a book into one wise-looking statement, but remove the deceptive medium (our language), tell them to reduce a vast dataset of points into one formula, and LLMs will show how much intelligence they truly have.
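The test the commenter describes can be sketched as follows (my own illustration; the hidden formula y = 2x² − 5x + 3 and the helper `quadratic_through` are made up): given a vast set of points secretly generated by one formula, recover the formula and verify it explains every point.

```python
def quadratic_through(p1, p2, p3):
    """Solve for (a, b, c) in y = a*x^2 + b*x + c from three points,
    via Lagrange interpolation expanded into coefficients."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom1 = (x1 - x2) * (x1 - x3)
    denom2 = (x2 - x1) * (x2 - x3)
    denom3 = (x3 - x1) * (x3 - x2)
    a = y1 / denom1 + y2 / denom2 + y3 / denom3
    b = (-y1 * (x2 + x3) / denom1
         - y2 * (x1 + x3) / denom2
         - y3 * (x1 + x2) / denom3)
    c = (y1 * x2 * x3 / denom1
         + y2 * x1 * x3 / denom2
         + y3 * x1 * x2 / denom3)
    return a, b, c

# 10,000 points generated by a hidden formula y = 2x^2 - 5x + 3.
points = [(x, 2 * x * x - 5 * x + 3) for x in range(10000)]

# A candidate formula solved from just three points...
a, b, c = quadratic_through(points[0], points[1], points[2])
# ...then checked against the entire dataset.
explains_all = all(abs(a * x * x + b * x + c - y) < 1e-6 for x, y in points)

print(round(a), round(b), round(c), explains_all)  # → 2 -5 3 True
```

Success here means compressing 10,000 points into three coefficients; with noisy data or an unknown functional form, the search becomes the genuinely hard part.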

Unless you can provide a definition of intelligence which is internally consistent and does not exclude things that are obviously intelligent or include things that are obviously not intelligent, the only thing Occam's Razor suggests is that the basis for solving novel problems is the ability to pattern match combined with a lot of background knowledge.


