AGI isn't all that impactful. Millions of them already walk the Earth.
Most human beings out there with general intelligence are pumping gas or digging ditches. Seems to me there is a big delusion among the tech elites that AGI would bring about a superhuman god rather than an ethically dubious, marginally less useful computer that can't properly follow instructions.
That's remarkably short-sighted. First of all, no, millions of them don't walk the earth - the "A" stands for artificial. And secondly, most of us mere humans don't have the ability to design a next generation that is exponentially smarter and more powerful than us. Obviously the first generation of AGI isn't going to brutally conquer the world overnight. As if that's what we were worried about.
If you've got evidence proving that an AGI will never be able to design a more powerful and competent successor, then please share it - it would help me sleep better, and my ulcers might get smaller.
Burden of proof is to show that AGI can do anything. Until then, the answer is "don't know."
FWIW, there's about a 3 to 4 order of magnitude difference between the human brain and the largest neural networks (gauged by counting connections: synapses in the human brain number in the trillions, while the largest neural networks have connections in the low billions).
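A rough back-of-envelope sketch of that gap, in Python, using assumed ballpark figures (tens of trillions of synapses, low billions of network connections; neither is a precise measurement):

    import math

    # Assumed ballpark figures only, not measurements.
    brain_synapses = 1e13       # human brain: synapses numbering in the (tens of) trillions
    largest_network = 1e9       # largest neural networks: connections in the low billions

    gap = math.log10(brain_synapses / largest_network)
    print(f"~{gap:.0f} orders of magnitude apart")   # prints ~4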
So, what's the chance that all of the current technologies have a hard limit at less than one order of magnitude increase? What's the chance future technologies have a hard limit at two orders of magnitude increase?
Without knowing anything about those hard limits, it's like watching a car accelerate from 0 to 60 mph in 5 seconds. That does not imply that after 1000 seconds it will be going 12,000 miles per hour. Faulty extrapolation.
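A toy sketch of that point (illustrative numbers only; the 150 mph top speed is an assumption): naive linear extrapolation of the first 5 seconds versus the same car with a hard ceiling.

    # Toy illustration of faulty extrapolation, not a claim about any real system.
    accel = 60 / 5                        # 0 to 60 mph in 5 s => 12 mph per second

    def naive_speed(t):
        return accel * t                  # pretend the acceleration never stops

    def limited_speed(t, top_speed=150):
        return min(accel * t, top_speed)  # same start, but with a hard limit

    for t in (5, 60, 1000):
        print(t, naive_speed(t), limited_speed(t))
    # naive:   60, 720, 12000 mph -- the extrapolation runs away
    # limited: 60, 150,   150 mph -- the ceiling takes over almost immediately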
It's currently just as irrational to believe that AGI will happen as it is to believe that AGI will never happen.
> Burden of proof is to show that AGI can do anything.
Yeah, if this were a courtroom or a philosophy class or debate hall. But when a bunch of tech nerds are discussing AGI among themselves, claims that true AGI wouldn't be any more powerful than humans very very much have a burden of proof. That's a shocking claim that I've honestly never heard before, and seems to fly in the face of intuition.
> claims that true AGI wouldn't be any more powerful than humans very very much have a burden of proof. That's a shocking claim that I've honestly never heard before, and seems to fly in the face of intuition.
The claim in question is really that AGI can even exist. The idea that it can exist, based on intuition, is a pre-science epistemology. In other words, without evidence, you have an irrational belief - the realm of faith.
Further, I've come to fully appreciate that when we stop to ask what reasons or evidence actually support the beliefs we hold, we often find they are not based on anything and could be (and possibly often are) wrong.
If we stood on intuition alone there would be no quantum physics, no heliocentric solar system, etc. Intuition-based truth is a barrier, not a gateway.
Which is all to say, the best known epistemology is science (assuming we agree that the level of advancement since the 1600s is largely down to the scientific method). Hopefully we can agree that demanding evidence isn't confined to a courtroom or a philosophy class; it's how we get to general knowledge, to truth.
Your framing also speaks to this, as if it were a binary. If you tell me AGI will exist and I say "prove it", I'm not claiming that AGI will never exist. I can _not_ believe that AGI will _not_ exist and, at the same time, _not_ believe that AGI _will_ exist. The third option is "I don't know, I have no knowledge or evidence." So no shocking claim is being made on my part here, AFAIK.
The internet for sure is a lot less entertaining when we demand evidence before accepting truth. Though, IMO it's a lot more interesting when we do so.
The difference isn't so much that you can do what a human can do. The difference is that you can - once you can do it at all - do it almost arbitrarily fast by upping the clock or running things in parallel and that changes the equation considerably, especially if you can get that kind of energy coupled into some kind of feedback loop.
For now the humans are winning on two dimensions: problem complexity and power consumption. It had better stay that way.
If you actually have a point to make you should make it. Of course I've actually noticed the actual performance of the 'actual' AI tools we are 'actually' using.
That's not what this is about. Performance is the one thing in computing that has fairly consistently gone up over time. If something is human-equivalent today, or some appreciable fraction thereof - which it isn't, not yet, anyway - then you can place a pretty safe bet that in a couple of years it will be faster than that.

Model efficiency is under constant development, and in a roundabout way I'm pretty happy that it is as bad as it is, because I do not think our societies are ready to absorb the next blow against the structures that we've built. But it most likely will not stay that way, because there are several Manhattan-level projects under way to bring this about; it is our age's atomic bomb. The only difference is that with the atomic bomb we knew it was possible, we just didn't know how small you could make one. Unfortunately it turned out that yes, you can make them small, nicely packaged for delivery by missile, airplane or artillery.
If AGI is a possibility then we may well find it - quite possibly not on the basis of LLMs, but they are close enough that lots of people treat it as though we're already there.
I think there are 2 interesting aspects: speed and scale.
To explain the scale: I am always fascinated by the way societies changed as they scaled up (from tribes to cities to nations, ...). It's sort of obvious, but when we double the number of people, we get to do more. With the internet we got to connect the whole globe, but transmitting "information" is still not perfect.
I always think of ants and how they can build their houses with zero understanding of what they do. It just somehow works because there are so many of them. (I know, people are not ants).
In that way I agree with the original take: AGI or not, the world will change. People will get AI in their pockets. It might be more stupid than us (hopefully). But things will change, because of the scale, and because of how it helps to distribute "the information" better.
To your interesting aspects, you're missing the most important one (IMHO): accuracy. All three are really quite important; miss any one of them and the other two are useless.
I'd also question how you know that ants have zero knowledge of what they do. At every turn, animals prove themselves to be smarter than we realize.
> And because of how it helps to distribute "the information" better.
This I find interesting because there is another side to the coin. Try for yourself, do a google image search for "baby owlfish".
Cute, aren't they? Well, turns out the results are not real. Being able to mass produce disinformation at scale changes the ballgame of information. There are now today a very large number of people that have a completely incorrect belief of what a baby owlfish looks like.
AI pumping bad info onto the internet is something like the end of the information superhighway. It's no longer information when you can't tell what is true and what isn't.
> I'd also question how you know that ants have zero knowledge of what they do. At every turn, animals prove themselves to be smarter than we realize.
Sure, one can't know what they really think. But there are computer simulations showing that with simple rules for each individual, one can achieve "big things" (which are not possible to predict when looking only at an individual).
My point is merely, there is possibly interesting emergent behavior, even if LLMs are not AGI or anyhow close to human intelligence.
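A minimal sketch of that kind of emergence, in Python: Langton's ant, a standard toy model (not a model of real ants). One trivial local rule per step, and after roughly 10,000 steps a repeating "highway" pattern appears that nobody programmed in.

    def run_langtons_ant(steps=11000, size=120):
        """One ant, one rule: turn right on white, left on black, flip the cell, step forward."""
        grid = [[0] * size for _ in range(size)]     # 0 = white, 1 = black
        x = y = size // 2
        dx, dy = 0, -1                               # start facing "up"
        for _ in range(steps):
            if grid[y][x] == 0:
                dx, dy = -dy, dx                     # turn right on a white cell
            else:
                dx, dy = dy, -dx                     # turn left on a black cell
            grid[y][x] ^= 1                          # flip the cell's colour
            x, y = (x + dx) % size, (y + dy) % size  # step forward (grid wraps at the edges)
        return grid

    if __name__ == "__main__":
        for row in run_langtons_ant():
            print("".join("#" if cell else "." for cell in row))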
> To your interesting aspect, you're missing the most important (IMHO): accuracy. All 3 are really quite important, missing any one of them and the other two are useless.
Good point. Or I would add alignment in general. Even if accuracy were perfect, I would have a hard time relying completely on LLMs. I've heard arguments like "people lie as well, people are not always right, would you trust a stranger? It's the same with LLMs!"
But I find this comparison silly:
1) People are not LLMs; they have a natural motivation to contribute to society in a meaningful way (of course, there are exceptions). If nothing else, they are motivated not to go to jail or lose their job and friends. LLMs did not evolve this way. I assume they don't care whether society likes them (or perhaps they somewhat do, thanks to reinforcement learning).
2) Obviously, again: the scale and speed. I am not able to write as much nonsense in as short a time as an LLM can.