What marks the hard line for "intelligent"? It's largely philosophical. Is a human? Dolphin? Mouse? Cockroach? Tree?


Abstraction and reflection have been posited in the past as prerequisites for intelligence: "I am the thinker that is thinking." How do we prove whether an AI has this capability or not? I'd say it's nearly impossible.

However, I think we can certainly prove when it doesn't. For instance, the fact that we need an external plugin to get the model to reliably return text asserting that 1+1=2 tells me that GPT-4 cannot reason about numbers in the abstract, and therefore lacks the ability to abstract.


That's a rather interesting line to draw, to me as someone with a young child: my child cannot perform that kind of abstract mathematical reasoning either. At the same time, I feel extremely confident they're a thinking, intelligent being.

I think we're so strongly biased against the deeply uncomfortable possibility that there is no hard line that we don't even want to consider it.


That heuristic also rules out every animal that doesn't pass the mirror test, and I'm pretty sure rats and dogs are thinking, intelligent beings.


Models often have trouble with discrete spaces, partly because of their internal continuous-space representations, but also, in this context, because the transition from probabilities over natural language to mathematics may not be as stark as it should be.
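
As a toy sketch of the mismatch (the numbers here are invented, not GPT-4's actual distribution): sampling from a next-token distribution can never guarantee a deterministic answer, no matter how peaked it is:

    import random

    # Hypothetical next-token probabilities after the prompt "1+1="
    # (made-up numbers for illustration only).
    next_token_probs = {"2": 0.95, "3": 0.03, "11": 0.02}

    def sample_answer():
        tokens, weights = zip(*next_token_probs.items())
        return random.choices(tokens, weights=weights)[0]

    # Even a heavily peaked distribution occasionally emits a wrong
    # answer -- tolerable for word choice, fatal for arithmetic.
    wrong = sum(sample_answer() != "2" for _ in range(10_000))
    print(f"{wrong} wrong answers out of 10,000 samples")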

But to make matters worse, or at least muddier, 1+1=1 can be a valid mathematical statement. It simply depends on the set and operation you have, whether you're doing modular arithmetic, and so on; sometimes you're given a unital magma. So there's still a heavy dependency on context for the problem setup, but the underlying discrete, deterministic rules applied to that context are far less malleable than the kinds of context switches LLMs handle well in NLP (such as language styling).
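
For instance, here's how "1+1" evaluates under a few different structures (a minimal illustration, nothing GPT-specific):

    # Ordinary integer addition
    print(1 + 1)        # 2

    # Modular arithmetic over Z/2Z: 1 + 1 wraps around to 0
    print((1 + 1) % 2)  # 0

    # Boolean semiring, where "+" is logical OR: 1 + 1 = 1
    print(1 | 1)        # 1

    # Tropical (min-plus) semiring, where "+" is min: 1 + 1 = 1
    print(min(1, 1))    # 1

Same surface expression, four different answers; the context picks the rules, but once picked, the rules are rigid.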


The inability to fully define a thing doesn't invalidate all attempts to sketch its outline. At the very least, it's easy to conclude that an intelligent being has to be able to reliably perform basic reasoning (given that all the necessary information has been properly provided). The current GPT models all fail at this, and neither longer context windows nor larger networks fix it.


What do you mean by "reliably perform basic reasoning"?



