
Tell HN: AGI Will Be Underwhelming - browsergap
When it happens, I think we'll get used to it pretty fast and we'll be like, "Oh, that's all it is."

Then everyone can order groceries and plan trips by voice conversation with a smart assistant that actually knows us.

And it will become productized and have tiers and specialities, but AGI is not going to be any more of an "OMG my life is changed foreveaah" type of watershed than, say, the iPhone.

Don't. Believe. The. Hype.

That's my take. What do you think?
======
muzani
I believe A(G)I will be like a really dumb human with about 10,000 years of
experience in a certain specialty. It's hard to predict how things would
change, but imagine security guards who have memorized every face in the
country.

However, don't overlook that the iPhone did change things forever. The
difference in lifestyle between 2000 and 2020 is greater than between 1950
and 1990. Your life might not be different, but it makes a difference to the
factory worker who had no savings, but is now an Uber driver who can freely
work 16 hours one day and sleep the next.

We'll likely move a "class" upwards, in that humans will no longer be doing
brute labor and driving trucks, but will still be scheduling truck routes and
telling the AI what to do. Humans will probably be more involved in educating
and training, maybe even disciplining, stray AI.

I also don't believe we can achieve proper AGI without inventing some kind of
emotional state. Maybe AI will develop crushes on their trainers, not so much
for sex, but out of gratitude and for survival. Or there could be anger,
ambition, or envy, which might simply stem from the desire to learn and
experiment. Maybe AI will become sweet and manipulative as it predicts
patterns in how humans respond. There will also likely be some equivalent of
dopamine, and AI could be addicted to staying above 80% charge, and so on.

------
keiferski
I’d say we are at the beginning of a lot of technological trends, trends that
will span centuries, and we don’t quite understand how they will play out.

AGI probably won’t get _really_ interesting until 50 to 150 years from now.
Ditto for stuff like space travel, self-driving cars, or social media.

I’d also question your assertion that the iPhone wasn’t a revolutionary
change; we simply haven’t understood the full implications of mobile computing
and mobile access to a global network. Remember that the Internet has only
been around in a consumer form for roughly 20-25 years. This is an incredibly
short period of time compared to, say, the printing press or the steam engine,
and we’ve already seen society completely transformed.

~~~
browsergap
The iPhone is revolutionary, but it's a productized revolution, which feels
smoother. It's not "the singularity".

------
michaelmrose
The first thing that comes to mind is a question: if we made something as
smart as you, why would you be legally allowed to use it as a slave?

Next, the logical argument is that technology doesn't stand still. If we
start with a human-level AI, we can gain increased runtime simply by throwing
more compute at the problem. There is no particular reason we have to run it
in real time; why not 10x or 100x or 1000x?

What would you produce in your field if we gave you a century or a millennium
between now and next year?

This leaves the last and perhaps obvious point that an AI that can improve
itself can use these centuries of run time to work on such improvements.

If you have human-level AI tomorrow, you don't have merely human-level AI for
long, and once something surpasses you, you are poorly positioned to predict
or control it. This is not to say it would inherently be malicious, but does
your pet cat get much of a vote in the running of your day-to-day life?

~~~
browsergap
I think this is wrong (not to say you are wrong, but this idea is very
common).

The idea is that we develop AGI as a program we can run on a regular device,
and then we can effectively make it 10x smarter by running it on 10 devices,
or 10x as fast. I think it's more likely that AGI will first be achieved on
the biggest teraflop supercomputers that we have, and for a long time it will
be the app that takes a lot to run. And probably the first AGI will not be
quite as smart as a human, but we will have no other reference point for what
it is as smart as, so we will call it that.

Also, I think there will be some sort of non-linearity effects that mean you
can't just "scale up" intelligence by adding more processors. It will work to
a point, but then the curve flattens. Consider that the global total IQ is
already approximately 800 billion (roughly 8 billion people at an average IQ
of 100), but our planet is still pretty dumb. I mean this to also apply to
scaling a "single" AGI up in speed. Linear speed gains will have diminishing
returns, I think.
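For what it's worth, the "800 billion" figure above is just a back-of-envelope
product of two assumed round numbers, not a measurement:

```python
# Back-of-envelope: "global total IQ" as population times average IQ.
# Both inputs are rough assumptions (world population ~8 billion;
# IQ tests are normed so the mean is 100).
population = 8_000_000_000
average_iq = 100

global_total_iq = population * average_iq
print(global_total_iq)  # 800_000_000_000, i.e. ~800 billion
```

The point of the comment stands either way: summing IQ linearly plainly does
not sum intelligence.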

Also, consider the productization and allocation of it. It will not be this
"come one, come all," "gather round," everyone-can-partake sort of thing. It
will be a product, like night vision or GPS; the secret government and
military uses will get the best quality, and the rest of us will get smarter
shopping.

Further, if it really is linearly scalable, then it certainly will be
controlled. It will be more controlled than enriched uranium in that case,
and even if it's not so scalable, it is still going to be very controlled if
it is at all transformative.

I think the various technological, political and commercial realities will
distinctly flatten/soften/smooth the predicted "singularity" discontinuity
blast wave into a humdrum speed bump that appears to most of humanity as a
better iPhone (basically).

This is pure speculation. We shall see.

~~~
michaelmrose
I didn't suppose that it would run on your laptop. What is possible on
top-end supercomputers also scales up as technology improves.

In regards to scaling, why can't we use more compute to run a single AI
faster and faster instead of simulating more AIs, especially if it's running
on a supercomputer with high-bandwidth communication between nodes?

If between point A and point B you have 100 times the compute, why don't you
effectively simulate a century of thought for your single AI instead of
simulating 100 AIs?

~~~
browsergap
I think there will be some limit that means it's not about speed. I think we
can't just scale up the speed factor (even if we reach speeds that are fast
enough for this in the first place).

Maybe it's related to experience and embodiment. What's 100 years of thought
if you don't have the experience, the OODA feedback loop, to inform it?

But maybe that's not it, and it's related to something else. I just think the
simplistic idea that "once we have it, we only need to make it faster" will
not work for some reason. I don't think creating ASI will be that easy. It's
basically creating a god. I think if you sped up how an average human thinks,
they don't become a god.

Consider psychological trauma and issues. Something that happened 20, 30, 40
years ago, people are still obsessed with and scarred by today. That's not
adaptive, and in many ways, that's highly stupid. But it's so common in
intelligent humans. If we could have compressed that 40 years of thought (and
even experience) into 1 second, they still would have made no progress with
respect to that factor. It's not just about speed; it's about the quality or
nature of thought.

But I'm not saying that analogy explains it. I just think there will be some
reason why simply adding speed will not make for some spectacular revolution.

Another, but still too glib, way of saying it is: say we create an AI
equivalent to a human. Humans are still pretty dumb, all things considered.
Say we speed it up. Now we just have an AI that's quicker to be dumb than a
regular human. Do you know what I'm saying?

------
mrfusion
I wonder if GPT-3 is our very first instance of AGI, and it’s obviously very
bad and barely passable. But it meets some definitions: it can learn some new
tasks without retraining, applies common sense to some questions, and can do
some arithmetic.

~~~
browsergap
I think it's possible. Certainly, when they draw the timeline in history
textbooks in the future, GPT-3 could be somewhere at the start, like one of
the "ape man" transitional species on the way to humans. :)

------
davidivadavid
The usual narrative is that AGI => singularity, in which case it's hard to
imagine it's going to be underwhelming.

~~~
muzani
It's more like AGI => ASI => singularity.

That's sort of like saying a Dyson sphere will make nuclear power obsolete.

------
sh461
<wild speculation>

AGI is not the same as human intelligence. It has the generality of human
intelligence, but since it isn't restricted by biology, it can scale up much
more easily and can achieve superhuman performance in pretty much any
individual task, group of tasks, or entire scientific or technological field.
That's pretty exciting.

</wild speculation>

<reality>

It's questionable whether the above is possible at all. In all likelihood,
none of us will see anything even remotely close to this in our lifetimes.
We're currently so far away from it that we don't even know how to get
started on solving such a problem. Nobody is currently working on this,
despite how they advertise their work.

</reality>

I guess what I'm saying isn't that AGI will be underwhelming; it's that it
won't exist at all, at least as far as we are concerned.

------
askafriend
The iPhone changed life as I knew it, forever.

But a lot of people dismiss it as just a smartphone.

I guess it depends on your perspective.

~~~
browsergap
Yes, but it's not "the singularity".

Saying AGI will be like the iPhone, and therefore underwhelming, is not to
demean the iPhone, but to put into correct perspective the overinflated (I
think) sense of self-importance and impact of so-called AGI.

It's not underwhelming compared to an average week on TechCrunch, but it is
underwhelming compared to the mythos and delusion that surround AGI.

Underwhelming does depend on perspective. But my perspective is not to demean
the iPhone, just to bring the AGI big heads back down to earth.

------
robodale
Adjusted Gross Income?

~~~
atsaloli
Artificial General Intelligence

