Hacker News new | past | comments | ask | show | jobs | submit login

One issue here is that the term AGI never had a specific meaning and has become even fuzzier in its use.

Obviously, LLMs are a specific type of AI that do not emulate many of the characteristics and capabilities of animals like humans.

However, they absolutely do have general-purpose applications, and the leading-edge models can reason, even if in some ways not yet as robustly as some humans.

But one core aspect of "AGI" used to be distinguishing general-purpose from narrow AI. GPT-4 clearly has general-purpose utility.

So part of what has happened is simply goalpost moving. The other part is a failure to recognize that animal (human) cognition is built from a variety of cognitive abilities and characteristics rather than just one thing.

LLMs simulate some of the language abilities but not other things, like a stream of subjective experience tied to high-bandwidth sensory input, emotions, certain types of very fast adaptation, properties of life, self-preservation and reproduction, advanced spatial and motion systems, etc. There are a number of cognitive characteristics that animals and/or humans have which are entirely missing from LLMs.

However, again, it's clear that LLMs, multimodal LLMs, and similar systems do have general-purpose applications that many previously assumed would require a far more complete emulation of a human.

Within a decade or two, when we have hyperspeed superintelligent AI, we may find out the hard way (and maybe too late) that not having all of those animal characteristics in AI systems was a _good_ thing.




We need some kind of set theory for describing both human and AI capabilities so we can classify their differences as time goes on.

For example if we say

( AI != Human ), it's as obvious and useless as saying ( John != Bob ). If we were comparing John and Bob, it would be easy for most people to see the differences: we would use our internal understanding of being a human and make a list of them. Bob is tall. John is athletic. Bob is good at math. Etc. How do we do this with AI, and will producers of AI perform valid tests and publicly release them?

Defining the differences between humans and AI in a more scientific way should at least help us some as these things progress.
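The set-based comparison proposed above can be sketched in a few lines of Python. The capability names here are purely illustrative placeholders, not drawn from any real benchmark or taxonomy:

```python
# Hypothetical sketch: model each agent's capabilities as a set,
# then use set operations to classify the differences over time.
# Capability names below are made up for illustration only.
human = {"language", "reasoning", "emotion", "sensory_stream",
         "fast_adaptation", "spatial_motion", "self_preservation"}
llm = {"language", "reasoning", "general_purpose_tasks"}

only_human = human - llm   # capabilities the LLM lacks
only_llm = llm - human     # capabilities unique to the LLM
shared = human & llm       # the overlap

print(sorted(shared))      # prints ['language', 'reasoning']
print(sorted(only_llm))    # prints ['general_purpose_tasks']
```

Tracking how `only_human` shrinks (or doesn't) across model generations would be one concrete way to make "AI != Human" more informative than "John != Bob".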



