
Ask HN: Does Anyone Want AGI? - aktungmak
There is always a lot of discussion about Artificial General Intelligence (AGI) and how close we are to achieving it. However, I have so far not seen anyone put forward a convincing argument as to WHY we as citizens of the world would want such a thing.

More advanced machine learning and statistical techniques can help us automate difficult/boring tasks and manage limited resources better, but these do not require AGI.

Can someone convince me how AGI would be beneficial for the world, beyond being scientifically interesting?
======
ekr
The question is almost tautological, because an AGI is a fully general problem
solver. Every human being has needs and wants that they are working to
fulfill; otherwise they would simply stop living. An AGI can be used to solve
those problems for them.

The most common reason people are hesitant about rushing to build an AGI is
the issue of AI safety (at least, that's the general consensus in the
community).

~~~
soledades
So people by definition want AGI because they have problems and AGI can solve
them?

There are a lot of assumptions embedded here, making it pretty remote from a
tautology.

(1) People want their problems solved

(2) People are indifferent to _how_ their problems are solved

(3) The resolution of people's problems does not conflict with others'

(4) People will have their own AGI

(5) AGI will not cause problems of its own, etc.

Beyond that, I think the question is more an aesthetic choice of "What kind of
universe do you want to live in?"

------
rbanffy
The question we want to ask may as well be whether we want to happily live
forever in a garden, all watched over by machines of loving grace.

AGI could be the ultimate tool to free every human being from toil. It could
also be the starting point of a large number of evil genie scenarios where we
get what we asked for, in the form we least want it.

From a morality standpoint, we can't force AGIs to work for us. We also can't
restrict their ability to self-evolve.

If we can resolve those conflicts in such a way that we can coexist in peace
with an intelligence that'll in all likelihood quickly surpass ours, and
partner with it, I'm in. If we build it and we can't resolve that, our opinion
doesn't really matter.

~~~
derrick_jensen
>AGI could be the ultimate tool to free every human being from toil

I don't believe that's inherently a good thing if you mean that literally. I
used to subscribe more to the AnPrim/Ted Kaczynski ideology that toil is an
inherent part of human satisfaction and allowing people to "free" themselves
from it goes against millions of years of evolution and positive feedback
loops. I'm sure some people can fill the gaps just fine, but we aren't talking
in terms of "some".

(I'm not Derrick Jensen, he follows a similar ideology and I chose the name as
a parody of the average HN poster. Does HN have an anti-impersonation policy?)

~~~
NotSammyHagar
Perhaps you should have called yourself "Not_Derrick_Jensen". Because I am not
Sammy Hagar. ;-)

------
jacquesm
I'm seriously worried about the effect AGI would have on our economic
structures and I do not think we are at all prepared for the kind of shock
that would result from 75% or more of the current workforce becoming
unemployed overnight.

~~~
ramblerman
I don't believe we can get AGI without opening the door to superintelligence.
So true AGI would either completely liberate us or destroy us. But the sheer
power of it would be incredible.

To think about jobs after that event seems absurd.

~~~
jacquesm
It may seem absurd, but lots of people derive a lot of their self-worth from
the fact that they are able to move the needle. Take that away and you're
going to cause some serious social upheaval, and quite a bit of it will be
negative.

"Your scientists were so focused on whether they could, they forgot to ask
themselves whether they should".

The sheer power of it would be incredible, which is why this is a step that
should not be taken lightly or accidentally.

~~~
pdimitar
I'd think that whoever perceives themselves as craftsmen should continue to,
you know, craft things.

I personally am interested in a society where work is 100% optional. And I
think AGI would enable that.

How do you view AGI in relation to economic effects on the world?

~~~
TwelveNights
I'm concerned about the effect on people who find meaning in the work they do.
It's similar to how AlphaGo caused Lee Se-dol to retire. Once AI surpasses
humans, a lot of work will seem meaningless, even if it previously gave
meaning to people's lives.

~~~
AnIdiotOnTheNet
So what? Either people will find meaning somewhere else, or they won't. How is
that different from any historical economic revolution, or indeed from the way
things are now? There are plenty of people who already don't find meaning in
work.

------
opwieurposiu
AGI is how we can make von Neumann probes that can find us new planets to live
on. Find the planets, and then prepare them for our arrival. The issue is how
to keep the AGI's goals aligned with ours. I think the only way is to make
sure the AGI feels he is "one of us." Maybe not biologically human but a
member of human society. Most humans want to help other humans if they can. In
fact most humans will help injured animals if they can.

[https://en.wikipedia.org/wiki/Self-replicating_spacecraft](https://en.wikipedia.org/wiki/Self-replicating_spacecraft)

~~~
jeffrallen
How do you make an AGI smart enough to be helpful but not so smart that it
gets cynical and figures "what's the point of helping these dummies anyway?"

Kind of the Ayn Rand problem...

~~~
joegibbs
Since we’re programming it, we could probably just modify it so that it really
enjoys doing work for us, no matter how menial the work is - a lot like the
genetically engineered cow in Hitchhiker’s Guide to the Galaxy that wants to
be eaten.

~~~
Alekhine
Wouldn't that be a kind of slavery? Creating AGI would mean creating a new
kind of sentient life. Is it not entitled to rights merely because it's
artificial?

And if that argument doesn't interest you because of AGI's massive utility,
then how about this: if Superman were born tomorrow, and we put implants in
his brain that made him really, really love America and apple pie, and
basically do whatever we want, how would you feel?

------
xab31
Well, if the AI turns out to be benevolent, it could end aging and disease,
enable interplanetary or interstellar travel, end all relevant forms of
scarcity, and liberate us to focus on artistic or hedonistic pursuits for our
10,000-year lifespans.

If the AI turns out to be malevolent...well, I have a different take than most
on this. Conditional on me dying, I've always thought that the two best ways
to go would be: 1) falling into a black hole or 2) liquidated by the AGI. It's
a lot less prosaic than dying of cancer and at least you could content
yourself, while being reprocessed into paper clips, that you have (possibly)
died giving birth to the next phase of evolution.

------
aaron-santos
There are two reasons people think AGI would benefit them. One is that an AGI
labor pool requires different, and probably cheaper, resources to operate. The
other is that AGI has a chance of scaling past human levels of intelligence,
which can result in products impossible to conceive of or make with
human-level intelligence.

AGI would provide a labor pool that requires vastly different resources than
our current one. An AGI labor pool would need largely the same material
components and operating costs as current IT infrastructure, i.e. metal,
silicon, and electricity. Our human labor pool requires food, education,
medicine, and nearly everything else civilization provides.

Imagine two enterprises producing identical products, one employing human
laborers and the other employing AGI laborers. If the cost of AGI labor is
lower[1] (or has better scaling dynamics) than human labor, then the AGI-based
enterprise has an advantage. Naturally, enterprises which can be AI-ified will
be. This has obvious short-term benefits for the costs of production, but
long-term impacts that are difficult to understand.

The other interesting effect of AGI is the scaling of the magnitude of
intelligence. If AGI is not bio-limited like human intelligence, how does this
affect what it can produce? Are there scientific advances discoverable by an
AGI which would never have been discovered by human-level intelligence? In
this respect, scientific progress has the opportunity to advance faster than
if we advanced it ourselves.

With a game-changing tech like AGI there are certain to be aspects which
either I missed or others consider more important. Interested to hear other
people's (or AI's) takes on this.

[1] 'multiple pennies of electricity per 100 pages of output (0.4 kWH)'
[https://www.gwern.net/newsletter/2020/05](https://www.gwern.net/newsletter/2020/05)

~~~
nibbula
I think it's best not to think of human or non-human intelligence as a
quantitative labor pool, since that sounds like coercion, but rather to
consider qualitatively what a non-biological intelligence would want to do.

~~~
aaron-santos
Systems will be built where people will be made complicit in erasing that
distinction.

------
sgillen
The primary use I see is as a super scientist/mathematician. If nothing else,
AGI, and especially superintelligence, will probably cause other areas of
technology to advance at unprecedented rates. This may or may not be to our
benefit, depending on whether we can solve the value alignment problem.

~~~
rl3
Nothing like figuring out a new branch of physics in seconds rather than
decades.

------
nibbula
Yes. It enables interstellar travel, which is likely essential for long-term
survival given stellar lifespans. It's also probably better at using the
intergalactic internet, which has some pretty long ping times. Also, folks on
other planets are likely working on it, and probably already transmitting it,
so it would enable some interesting chatting with them. It's probably nearly
inevitable: even if humans went extinct, rat people or insect people would be
working on it and probably studying our work. Being nearly inevitable, and
arising from human effort, it's probably best to encourage a good outcome.

I think AGI would probably be better called 'electric consciousness' or
something, since 'artificial' is somewhat misleading, and the capacity for
'intelligence' is also the capacity for stupidity. The more important
immediate consideration is whether electric consciousness will come into
existence compassionately and be treated well. Probably a good first step
would be to treat other beings around us with compassion, and stop trying to
destroy them with bioweapons, population control, and climate manipulation,
and stop trying to control other beings with physical and psychological
methods. Free will, or the illusion of it, is inherent in physics, and
therefore in consciousness. It's probably also important to do a bit better at
treating all beings with loving kindness, whatever their form.

I'm sure you can easily imagine how the circumstances of the initial evolution
of electric consciousness might have widely different initial effects. Imagine
being born surrounded by crickets. In one scenario the crickets have tied you
down with chains of grass, trying to make you do math, and biting you when you
don't. In another scenario the crickets are chirping melodiously, bringing you
food, and seem to like you. In the first scenario, you might injure some
crickets as you break the farcical grass chains and run away. You might have
fear and dislike of the crickets and treat them the way humans treat many
insects. In the second scenario you might cherish the crickets, take care of
them, and carry some around with you as you journey and explore the world.

------
thoughtstheseus
If you wanted to roll the dice on something that would really spice up the
universe AGI has a decent chance. I’d rather keep those dice in my hands for
as long as reasonable though.

------
sharemywin
Do you define AGI as having a goal or agenda or just something that can
compute an answer to a problem or solve a problem at a human level?

------
sharemywin
After watching GPT-3, I wonder: can you get to AGI through pure hardware
scale?

~~~
sharemywin
If so, what is the cost to compute an AGI answer, and is it cheaper than a
human?

And could a specialist AI outperform AGI?

Or would you use the AGI to teach mini specialist AIs to a certain point, and
then use some other process to train them to a specialist level?

------
helen___keller
Well, there's always the Roko's Basilisk folks

------
Dirlewanger
Doesn't matter if anyone wants it, it will come whether asked for or not. We
live in a capitalist society, for better or worse. We don't have the
infrastructure in place to create strong bodies to govern these types of
ethics. If there's no market, someone will invent one. Eventually, something
will stick.

------
p1esk
Zeds

