
It calls to mind a memorable exchange in the Star Trek: The Next Generation episode "The Measure of a Man". In that episode, Picard asks another character to prove that he, Picard, is sentient[0].

Likewise, I might ask how to prove that humans are intelligent (subtext: in ways that LLMs are not). The most obvious delta here, to me, is generalization: humans appear able to reason about questions far less similar to anything they have seen before than current LLMs can.

E.g. making ChatGPT play chess was a bit of a joke meme for a while, since it routinely made illegal moves. Your average human can be quickly taught to play chess well enough to mostly avoid that. ChatGPT can recite the rules of chess very accurately, but it seems unable to apply them to actual games. We've seen other projects (e.g. AlphaZero) that can learn the rules and strategy of games like chess, but those projects are (despite using neural nets under the hood) structurally pretty different from LLMs. They also generally have no capacity to describe the rules in human language.
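
To make the illegal-move point concrete, here's a minimal sketch of the kind of harness people wrap around a chat model, assuming a hypothetical query_llm callable standing in for whatever API you use. The python-chess library does the legality check the model fails to do on its own:

    import chess

    def play_llm_move(board: chess.Board, query_llm) -> chess.Move:
        # Ask the LLM for a move in SAN; retry a few times on illegal output.
        # query_llm is a hypothetical stand-in, not a real API.
        for _ in range(3):
            san = query_llm(f"Position (FEN): {board.fen()}\nYour move in SAN:")
            try:
                move = board.parse_san(san.strip())  # raises ValueError if illegal
            except ValueError:
                continue  # the failure mode described above
            board.push(move)
            return move
        raise RuntimeError("no legal move produced in 3 attempts")

The telling part is that the retry loop is load-bearing: a human beginner internalizes the legality check after a few games, while the model needs it bolted on from outside.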

Note that this is not a complete argument. There exist humans (in possession of intelligence) who cannot meaningfully learn to play chess either. E.g. my friend's 1-year-old child is obviously intelligent in a very literal sense, but does not have the patience to learn chess. I can't evaluate whether she is intelligent enough for it, because this other constraint gets in the way first. So this is only a framework for thinking about the question; if you already believe LLMs are capable of intelligence, this argument in isolation should not convince you otherwise.

[0]- It makes sense in context. He's trying to establish a satisfactory test for sentience.




Well, it's clear to me that if we want to assert humans == intelligent in all cases, then the definition must be loose enough that no healthy human can fail it, even the dumbest ones.

The fact that we are even talking about this means there is clearly a large scale of intelligence, from mosquitos dodging a bug zapper to von Neumann making the rest of us look like drooling idiots. It's pretty clear that the average LLM sits lower than the average human on this scale, but it may already be above the dumbest humans. And that scares people, because we won't improve further and LLMs definitely will. Or LMMs (large multimodal models) will, anyway; text-only LLMs are a bit of a dead end.


Star Trek, while entertaining, cannot offer any philosophical insights. It was, after all, written to entertain, not to demonstrate or explore deeper truths. It was also targeted at a particular subset of the population (one that may be over-represented on HN), and will likely encode tropes that that sub-population finds compelling.


Is there a problem with this methodology for interrogating concepts like intelligence or sentience, or are you just dismissing it due to its source? Obviously Star Trek couldn't go very deep with it, but it doesn't seem to be an obviously bad starting point.


GPT-3.5 (specifically gpt-3.5-turbo-instruct) is pretty decent at chess now.



