Scaling False Peaks (oreilly.com)
23 points by 2517AD on Aug 7, 2022 | 3 comments



Yet another "this won't get us to AGI" post that won't take a stand about how AGI should be defined. It also commits the other standard fallacies in these pieces:

- endless argument by analogy that makes no falsifiable claims about the actual problem

- comparing the one-time training costs (energy and data) of an ML model to the ongoing operating costs of an individual human (see the sketch below)
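A quick back-of-envelope sketch of why that second comparison misleads. All numbers here are rough assumptions, not from the article: ~1,300 MWh is a commonly cited estimate for GPT-3's training run, the human brain draws roughly 20 W, and the query volume is hypothetical.

    # Training cost is one-time and amortized across every downstream use;
    # a human's "operating cost" is ongoing. Comparing them directly misleads.
    # All figures below are rough, clearly labeled assumptions.
    training_energy_kwh = 1_300_000   # ~1,300 MWh, a commonly cited GPT-3 estimate
    queries_served = 1_000_000_000    # hypothetical lifetime query volume
    per_query_wh = training_energy_kwh * 1000 / queries_served
    print(f"amortized training energy: {per_query_wh:.2f} Wh/query")

    brain_power_w = 20                # human brain draws roughly 20 W
    seconds_per_answer = 30           # assume 30 s of thought per answer
    human_wh = brain_power_w * seconds_per_answer / 3600
    print(f"human operating energy:    {human_wh:.2f} Wh/answer")
    # Note the human side omits the human's own "training": years of
    # childhood metabolism, which the naive comparison also ignores.

Amortized this way, the two land within an order of magnitude of each other; the headline comparisons only look absurd because they charge the entire training run against a single conversation.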


"...but it’s not clear that Gato’s, GPT-3’s or any other contemporary architecture is necessarily the right vehicle to reach the destination."

"GPT-3, for example, developed its language model based on 45TB of text. Over a lifetime, a human reads and hears of the order of a billion words; a child is exposed to ten million or so before starting to talk."

What statements like this fail to consider is that the hardware the child's brain is made of is the product of millions of years of biological evolution, evidently tailored to solve human-centric problems far more easily. But that brain operates largely within the limits of its own subjective experience (for example, the experience of learning to speak a language). When we define AGI, it cannot be only in such anthropocentric terms. It needs to be able to solve a multitude of real-world problems with approaches humans might never readily comprehend. The solution space for these tasks must be large enough to admit more than one route through it, i.e. more than just the "human way". It's a narrow viewpoint to say that an AGI needs to learn in human terms.


>What statements like this fail to consider

What makes you think he failed to consider this? Or that this point, the entire crux of your statement, isn't already inherent in his criticism?

>it cannot be only in such anthropocentric terms. It needs to be able to solve a multitude of real-world problems with approaches humans might never readily comprehend

Isn't that the point: that being "at least" at the level of human intelligence is already orders of magnitude more advanced and efficient than anything currently available or on the horizon? The "human way" isn't even close to possible, and yet you're saying AGI needs to be greater?



