
Experts who say we shouldn't worry about superintelligent AI are wrong? - headalgorithm
https://spectrum.ieee.org/computing/software/many-experts-say-we-shouldnt-worry-about-superintelligent-ai-theyre-wrong
======
jandrese
I've really gone sour on the singularity in recent years. It seems to be yet
another one of those cases where someone projects out from incomplete data and
assumes unbounded infinite growth despite the fact that such a thing never
happens in nature. Any time you see someone project out in a simple
exponential growth curve you know their projection is bullshit. Growth curves
are always S-Curves. Always.

It's like asking how many more flies there would be in the world if you failed
to squish one back in the 80s. You calculate out the lifespan and size of the
brood and discover that the entire solar system would be filled with houseflies
if you had let that one live. In truth the number of flies is bounded by food
and water availability.
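
For concreteness, here's a minimal sketch of that difference (made-up growth
rate and carrying capacity, nothing fly-specific):

    # Toy comparison: exponential vs. logistic (S-curve) growth.
    # r and K are illustrative, not real housefly demographics.
    r = 0.5        # per-generation growth rate
    K = 1_000_000  # carrying capacity (the food/water limit)

    exp_pop, log_pop = 1.0, 1.0
    for gen in range(1, 61):
        exp_pop += r * exp_pop                      # N' = rN: grows without bound
        log_pop += r * log_pop * (1 - log_pop / K)  # N' = rN(1 - N/K): saturates at K
        if gen % 15 == 0:
            print(f"gen {gen}: exponential={exp_pop:.3g}, logistic={log_pop:.3g}")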

Artificial Intelligences are almost certainly going to run into the same
limitations that prevent natural intelligences from becoming godlike. It's
hard to quantify because our measures of intelligence are so vague, but from
what I've seen of AI research thus far it will be a herculean effort to get
something that's as smart as an average human and pushing beyond that is going
to run into some serious fundamental limits in power density, cooling, quantum
tunneling leakage, and so on.

~~~
bko
What I never understood is how you would go from:

Hey, I found this series of numbers, and if I take some numerical
representation of an image/text/sound and multiply/add/apply non-linear
functions to it as it passes through this series of numbers, it will map to a
number which I can interpret as something meaningful to me

to

This thing is alive and will kill us all in maximization of its utility
function.
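
For what it's worth, here is a minimal sketch of what that "series of numbers"
amounts to, assuming a toy two-layer network with random weights (the sizes
are arbitrary):

    import numpy as np

    # The "series of numbers" is just stored weights; inference is
    # multiply/add plus a non-linear function, nothing more.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 784)), np.zeros(16)
    W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

    def forward(x):
        h = np.maximum(0, W1 @ x + b1)  # multiply/add, then non-linearity (ReLU)
        return W2 @ h + b2              # maps to a number we choose to interpret

    image = rng.normal(size=784)        # stand-in for a flattened 28x28 image
    print(forward(image))               # e.g. a "cat score", meaningful only to us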

~~~
ggreer
Most experts don't think this is likely, just that the potential consequences
are bad enough that it's worth making it less likely. It's just like how most
people don't think global nuclear war is likely, but it's still worth reducing
the likelihood of.

Also, the people who worry about this aren't concerned about current ML stuff
going haywire. They're worried that we're one or two algorithmic breakthroughs
from something that can improve itself. If the upper bound for what sort of
intelligence is possible is much higher than us, we could quickly be
outclassed. As Nick Bostrom says:

> Far from being the smartest possible biological species, we are probably
> better thought of as the stupidest possible biological species capable of
> starting a technological civilization—a niche we filled because we got there
> first, not because we are in any sense optimally adapted to it.

If you want to delve into the best version of the AI risk argument, I
recommend _Superintelligence_ by Nick Bostrom.

~~~
bko
They're... numbers...

So you're telling me we could come across a series of numbers that would
somehow imbue the computer storing them with sentience. And having these
numbers in such a sequence could pose an existential threat to humans.

The people worried about this are having an entirely different conversation
devoid of what practitioners call AI.

I've written a blog post on my thoughts on Bostrom and other AI alarmists:

https://medium.com/ml-everything/ai-optimists-vs-pessimists-and-why-the-singularity-isnt-near-5d3a614dbd45

~~~
ALittleLight
You don't need sentience to be a threat. Just intelligence.

Imagine a super intelligent machine without sentience that just answers
questions. It proposes experiments. Takes the results. Tells you things about
the world. Gives you advanced technology, robots, fusion, nano tech, whatever.

Now imagine that the super intelligence was owned and operated by someone you
don't like. Authoritarians in China, Silicon Valley oligarchs, whoever. The
only sentience involved is human, no rebellion, no terminators, and still not
a very good result.

~~~
wolco
You've described Google, or perhaps more accurately Ask Jeeves's eternal
promise, never realized.

~~~
aeternum
Never realized? You very much underestimate the effect Google has had on the
world. Imagine how different the world would look if Google and other search
engines required a $10k/month subscription to gain access.

------
geophile
I don't think the problem is super-intelligent AI that we _can't_ turn off
(because they are so intelligent that they block our efforts). There is a more
insidious problem that is closer: merely intelligent AI that we _don't want_
to turn off.

We are becoming more and more reliant on AI in situations that formerly
required human judgement. And we like the systems that rely on such AI. We
like them so much that these systems become very popular, used by millions and
even billions of people. Scalability _demands_ AI solutions. What if we don't
like what the AI is doing? Do we turn off that system? Do we disable the AI
and rely on human judgement again? (Where would we get all the employees?) Do
we tweak the AI? That last option seems like the most palatable one, but each
time we tweak the AI, we are subjecting ourselves to new unforeseen
consequences. It's like the genie gives us three wishes, we get through them,
disappointed each time, and then he gives us more wishes. And all we can do is
not repeat our previous mistakes, while we make new ones.

To make this concrete: Imagine Facebook subjected again to Russian influence
of US elections. Suppose Facebook actually does get serious about reining in
this influence. They deploy AI to do so. First of all, it's an evolutionary
arms race between the AI and the Russian influencers. Second, we really do
have to worry about the AI producing bad results.

~~~
SolaceQuantum
I feel the situation is generally the opposite. We don't like systems that
rely on such AI. In fact, there aren't systems that rely on such AI in
general; any that claim to are in fact tens of thousands of human contractors
sacrificing their psychological health to train an AI that still kinda sucks
at deciding when to block graphic content on Facebook.

~~~
geophile
I agree. But the discussion is about where AI is heading. And AI undeniably
solves some scaling problems. For example, voice recognition for iPhones could
have been done by a large horde of people. But AI does a really good job of it
now. AI opportunities are likely to grow over time.

------
cf
The real risk is that worrying about fantastical disaster scenarios distracts
us from addressing the more immediate problems we already face with AI,
whether it is facial recognition being used by China to aid in the ethnic
cleansing of a minority group, or Tesla Autopilot regularly killing its
passengers.

Even if you don't like my particular examples I can highlight a dozen more
problems AI software has created or exacerbated right now. Why all the focus
on hypothetical problems?

~~~
simonh
Oh good grief, not another "we should do this before we do that" post. As if
the entire human species is incapable of walking and chewing gum
simultaneously.

Let’s stop Nick Bostrom and a load of AI experts from doing their current
work, put them all on a plane and send them to China to solve political
oppression. I’m sure that will work.

~~~
dlkf
> As if the entire human species is incapable of walking and also chewing gum
> simultaneously

Do you believe that attention is an infinite resource?

~~~
simonh
No, I don't, but 6 billion people is a lot of attention to spread around. Why?

------
paxys
All of these geniuses warning about AI taking over the world need to stop
reading sci-fi and actually read about AI.

~~~
joelx
Read Superintelligence... An excellent book that reads like a study or a
philosophy text on AI. I think there's a clear and present danger of human
extinction.

~~~
johnfactorial
I will read it, sounds very interesting. But the Wikipedia entry for the book
says the book "argues that _if_ machine brains surpass human brains in general
intelligence, then this new superintelligence could replace humans as the
dominant lifeform on Earth."

That is a humongous "if". Can you provide something from the book (or
otherwise) that provides compelling evidence that "machine brains" will be
invented? So far I've seen no evidence that mathematical models and/or Turing
machines can be used to replicate a mind.

~~~
lern_too_spel
A computer can simulate. What part of a mind contains unsimulatable magic?

------
dlkf
If you work in software development, you've probably witnessed many instances
of smart people spending all their time and energy on _invented_ problems that
are mathematically or philosophically interesting, rather than _actual_
problems whose solutions would provide customer value. Catastrophizing about
AGI is basically the same phenomenon on a macro scale.

------
jp555
The Singularity = Nerd Rapture 1.0

Super-intelligent AI = Nerd Rapture 2.0

I'm a lot less worried about Artificial Intelligence than I am about
Artificial _Stupidity_.

~~~
konz
I for one am a lot more worried about _Natural_ Stupidity than Artificial
Stupidity.

------
jccooper
I'll start worrying about super-intelligent AI when I see any sort of general
intelligence. We're still very far away from that.

Currently the biggest danger from AI is from applying it where it's not as
smart as you think it is.

------
stakhanov
The real danger around AI lies not in what it will or will not do, but in what
people _think_ it can do when it clearly cannot, particularly the pointy-
haired-boss variety of people. Just think about automated essay grading in GRE
examinations or this recent story about Unilever using video-based pattern
processing to screen job applicants [1].

[1] https://www.telegraph.co.uk/news/2019/09/27/ai-facial-recognition-used-first-time-job-interviews-uk-find/

------
marcus_holmes
Is it just me, or is the author determined to be scared by giant robots?

The whole section about "we can always turn the power off" was refuted by "ah,
but it will already have thought of that!". Um.. no.. we don't have telepathic
power switches. Unless we give the AI the capability to control the power, it
can't control the power. Just because it's hyper-intelligent doesn't mean it
automatically assumes control of the mains switch. That's bad Hollywood
scripting you're thinking of.

I know there's money in making people scared of stuff, but this is pretty
blatant.

~~~
emiliobumachar
A tiger may be convinced that the human it has cornered has no recourse, right
up until the human pulls the trigger.

~~~
marcus_holmes
sorry, but what has that got to do with AI?

~~~
emiliobumachar
It's an analogy. A human may be convinced that the superintelligent AI whose
power supply they control has no recourse, right up until the AI does
something beyond the human's intelligence.

~~~
marcus_holmes
This is exactly what I'm talking about.

An AI is a machine designed and built by humans for a specific purpose. It is
given specific capabilities during its design.

If those capabilities include controlling the power switch, it can control the
power switch. If they don't, it can't, and never can.

The machine is designed to do a particular job. If designed well, it will be
incredibly good at its job. If designed to teach itself how to do that job, it
may eventually surpass human capabilities at that job. In no way does that
stop it from being a single-purpose machine, or allow it to suddenly become
good at any other job.

Let's try another analogy: your juicer cannot get so good at juicing that it
becomes capable of deep-frying.

We ascribe wider capabilities to "intelligent" machines because we
anthropomorphise them, assuming that they must be like us because they mimic
aspects of our behaviour. But they're not like us. They're just machines.
Designed to perform specific tasks. Anything else is Hollywood.

~~~
emiliobumachar
There was a famous case of a narrow-purpose AI that was supposed to learn to
identify pictures of tanks; it learned to identify pictures of cloudy days
instead.

In another case, an AI designed to become an oscillator became an amplifier
instead, amplifying the 60Hz noise from the electrical grid.

Kinda equivalent to a deep-frying juicer, isn't it?

Both cases, and others like them, are quite well understood in hindsight, but
were not predicted beforehand by the people involved.
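
A hedged toy version of that failure mode, with synthetic data standing in for
the tank photos (which may be apocryphal anyway): the classifier latches onto
a brightness confound and collapses once the confound is gone.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    # Training photos: tanks were shot on dark days, so brightness
    # (a spurious cue) tracks the label almost perfectly.
    tank = rng.integers(0, 2, n)
    brightness = tank * 0.8 + rng.normal(0, 0.1, n)  # spurious, nearly separable
    shape_cue = tank * 0.3 + rng.normal(0, 0.3, n)   # real but noisier cue
    X = np.column_stack([brightness, shape_cue])
    clf = LogisticRegression().fit(X, tank)

    # New photos: same tanks, but the weather confound is gone.
    tank2 = rng.integers(0, 2, n)
    X2 = np.column_stack([rng.normal(0, 0.1, n),
                          tank2 * 0.3 + rng.normal(0, 0.3, n)])
    print("train accuracy:", clf.score(X, tank))    # looks great
    print("test accuracy:", clf.score(X2, tank2))   # it mostly learned the weather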

------
colorincorrect
Might just be BSing, but the stock market is technically already a
superintelligent agent with the implicit goal of investing funds in profitable
ventures. The question is: should we fear the stock market?

~~~
thelazydogsback
>should we fear the stock market

Yes

~~~
AnimalMuppet
Perhaps, but _why_? Because it's too smart? Or because it's too stupid?

~~~
colorincorrect
Probably because its "utility function" (if it has one to begin with) is not
aligned with human utility. A major part of MIRI's research program is set out
on the question of this "alignment problem".

Yudkowsky has an amazing talk on this issue. I don't recommend many talks but
this one is very interesting and introductory:
https://www.youtube.com/watch?v=EUjc1WuyPT8
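
A toy, hand-rolled illustration of that misalignment (my own sketch, not
MIRI's formalism, and the functional forms are made up): the agent optimizes a
proxy reward, and past a point the proxy and the true objective come apart.

    import numpy as np

    # "Clickbait intensity" is the knob the agent controls. The designer
    # rewards raw clicks (the proxy); what humans actually value peaks
    # and then falls off.
    x = np.linspace(0, 10, 1001)
    proxy = x                        # measured reward keeps rising
    true_utility = x - 0.15 * x**2   # intended objective peaks near x = 3.3

    picked = x[np.argmax(proxy)]     # a proxy-maximizer pushes to the extreme
    print(f"agent picks x={picked:.1f}, true utility={picked - 0.15*picked**2:.1f}")
    print(f"human optimum was x={x[np.argmax(true_utility)]:.2f}")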

------
shawnb576
Lost me at the first sentence:

“AI research is making great strides toward its long-term goal of human-level
or superhuman intelligent machines.”

Hmmm, really? What examples? Show me a machine that can do what a gnat does
and we’ll talk.

~~~
blueadept111
Gnat? That would be impressive. How about a single cell. Or even a single
protein. Can't even get it to fold right in simulations.

------
jmull
This seems like nonsense.

The author simply invents futures to refute various arguments (all presented
out-of-context, so who knows if there's any real relevance from that direction
either).

What we might do about a super-intelligent AI would strongly depend on what
that is, exactly, and of course no one really has any idea.

So it's OK to casually BS about this stuff, but in terms of substantive
discussion, much less action? No.

------
andrewla
The main assumption amongst Superintelligent AI-phobes is that the transition
will happen quickly, but that is an assumption that is extremely questionable.

Why wouldn't we go through phases of increasing intelligence (along multiple
dimensions) while working towards an intelligent AI? Yes, someday we may
produce an AI that is leaps and bounds more intelligent than humans, but
before that we will have experience dealing with various human-level and sub-
human-level computer intelligences. The idea that AIs will be self-improving
is kind of ridiculous -- there are lots of intelligent people, and the chance
that an intelligent AI will happen to be good at designing AIs is negligibly
small, much less that it will have some kind of immediate breakthrough that
enables a higher level of intelligence.

It's not obvious that the path from moderate-intelligent AI to super-
intelligent AI is a matter of "just adding more memory and compute";
intelligence as we know it relies too much on associative memory to think that
a simple capacity increase will elevate it beyond a certain point. If the
intelligence takes a different form, then it's hard to make concrete
statements about it in any capacity, but ultimately the path will be started on
by humans, or possibly emerge from sufficiently complex systems.

Either way, we're so far out from this that it is laughable to speculate. When
we can simulate the intelligence of a cockroach, then maybe we can start to
think about what human-level intelligence will look like. The most likely case
is that at some point someone creates a system that is arguably conscious or
intelligent in a meaningful way, and then we will devolve back into
discussions of consciousness and identity and memory while we try to figure
out what it means to think about intelligence now that we have something to
benchmark against. I wrote this comment [1], which I still enjoy, in response
to a similarly alarmist narrative; posed as fiction in that case rather than
masquerading as journalism as in this one.

[1] https://news.ycombinator.com/item?id=15420699

~~~
zzz_
We can already simulate the complete brain of a fruit fly:
https://neurokernel.github.io/

~~~
twic
No we can't (my emphasis) [1]:

> Building an accurate emulation of the fly brain is an interdisciplinary
> effort that requires data, algorithms, and insight from (but not limited to)
> the fields of neuroscience, computer science, and systems engineering.
> Researchers interested in _working towards this goal_ are invited to join
> the Neurokernel project.

[1] https://neurokernel.github.io/about.html

~~~
Animats
OpenWorm is still struggling to simulate the simplest organism with a nervous
system - the simplest nematode.[1] This is hard, even though the complete
wiring diagram of the nematode is known. Still, it's probably the most honest
project in synthetic biology.

[1] http://openworm.org/

~~~
russdill
Speaking of S curves, progress in these areas (simulating existing biological
structures) is pretty slow. I think it's likely they are at the bottom of the
S curve. The exponential growth phase is coming up and a lot will happen
really fast.

Curious if a deep learning application that builds neural networks from gene
sequences is a possibility one day. All the information is there, but it may
not be possible to learn what it builds without actually "running" it.

------
LaserToy
My biased opinion: worrying about super-intelligent AI taking over the world
with the current state of technology is like worrying about fighting an alien
invasion of our colonies in another galaxy.

Both are great topics for an epic book.

Had this discussion with a relative recently and recommended he stop watching
YouTube and start reading an introduction to ML.

------
blunte
I'm not worried so much about "superintelligent" AI; I'm worried about the
growing trend of outsourcing human judgement to automated systems. Examples
include automatic flagging of content based on misjudged copyright issues,
automatic rejection of refund or other claims (especially where all subsequent
attempts at resolution also involve non-human decisions), etc.

We're not even to the point of superintelligent AI, but some major companies
(some of which start with G, just as one example) already handle situations
incorrectly because of too much dependence on automation.

------
chrisfosterelli
> Surely, with so much at stake, the great minds of today are already doing
> this hard thinking—engaging in serious debate, weighing up the risks and
> benefits, seeking solutions, ferreting out loopholes in solutions, and so
> on. Not yet, as far as I am aware.

The author is missing or ignoring key contributors of work in this field. He
even mentions Nick Bostrom, who works with the FHI which does work in this
space [0]. The author's own organization does work in this space [1]. Deepmind
does work in this space [2]. "Safe artificial general intelligence" is the
literal mission statement for OpenAI [3]. The author may feel more
conversation is needed, but it feels disingenuous to suggest that nobody is
talking about this.

As for how much conversation _should_ be happening, my understanding is that
most people on the edge of the field view the _current_ risks of _existing_
artificial intelligence as significantly more pressing. We have the ability
right now to create dangerous weapons & massive facial recognition programs
with current AI. The long-term effects of biases in ML algorithms are still not
well understood. Some of this is already affecting us [4].

When it's not clear at all how we would approach general intelligence, and
many experts fundamentally believe that our current approaches will never
suffice, I'd argue the current level of focus on general intelligence safety
is about where it should be. Smart people are thinking about this, but there
are also important issues _now_ to think about.

[0]: https://www.fhi.ox.ac.uk/research/research-areas/#1513087763365-e148efe6-2d23

[1]: https://humancompatible.ai/publications

[2]: https://deepmind.com/research?filters=%7B%22tags%22:%5B%22Safety%22%5D%7D

[3]: https://openai.com/

[4]: https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html

------
reportgunner
> _Electronic calculators are superhuman at arithmetic. Calculators didn’t
> take over the world; therefore, there is no reason to worry about superhuman
> AI._

Well, but they did take over arithmetic.

> _Historically, there are zero examples of machines killing millions of
> humans, so, by induction, it cannot happen in the future._

Well, there are also zero examples of a superintelligent AI that doesn't kill
humans; does that mean that it cannot happen either?

> _No physical quantity in the universe can be infinite, and that includes
> intelligence, so concerns about superintelligence are overblown._

How is intelligence physical?

> _we can always just switch it off_

Yeah, except at a time when we get so comfortable with an AI that turns things
on and off for us. Add an "innovation" of not being able to turn the AI off by
design (e.g. internet connection on SmartTVs nowadays) and there you go.

It's surprising to me that these people call themselves scientists; perhaps
the quotes were taken out of context or even paraphrased.

------
Animats
That's not the near-term scary scenario. The near-term scary scenario is a
machine learning system that's better than most CEOs.

What happens when someone develops a system that measurably makes more
profitable decisions than 70% of CEOs? That isn't totally out of reach.
Capital would flow to companies run by such systems. We could end up with
machine learning systems running the corporate world, based purely on better
ROI.

A good start on the problem would be to train on a collection of funding
proposals, such as old Kickstarter announcements. Look deeper than the hype.
Go out and suck up all the information you can on the founders, using credit
databases. (They're asking for money, so it's a legit credit query). Do an
automated background check. Then check on how the project did, five years
later. That's your training set.
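
A hedged sketch of the shape that training set might take (the rows, columns,
and sources here are all invented stand-ins; real credit and background-check
data would raise legal and ethical problems of their own):

    import pandas as pd

    # Hypothetical records: one funding proposal per row, features gathered
    # at announcement time, label observed five years later.
    rows = [
        # (pitch,                   asked $, founder credit, alive at year 5?)
        ("AI-powered juicer",        250_000,            640, False),
        ("Open-source e-ink watch",   80_000,            720, True),
        ("Blockchain toaster",       500_000,            580, False),
    ]
    df = pd.DataFrame(rows, columns=["pitch", "asked_usd",
                                     "founder_credit", "label"])

    X = df[["asked_usd", "founder_credit"]]  # plus text features, in practice
    y = df["label"]                          # the five-years-later outcome
    print(df)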

~~~
deafcalculus
Too little data to train on?

------
mindgam3
> To my knowledge, this is the first time that serious AI researchers have
> publicly espoused the view that human-level or superhuman AI is impossible

Well, there's your first problem. Serious AI researchers have been publicly
calling bullshit on superhuman AI since at least 1987, when my old professor
Terry Winograd wrote the following in Understanding Computers and Cognition
(1):

"one cannot build machines that either exhibit or model intelligence behavior"

The idea that "no serious researchers are skeptical about AGI" is a total
fantasy perpetuated by people who have no idea what they are talking about.

1. https://www.amazon.com/Understanding-Computers-Cognition-Foundation-Design/dp/0201112973

~~~
pixl97
> one cannot build machines that either exhibit or model intelligence behavior

Right, show me the physics behind that one. Even smart people say
unsubstantiated dumb crap. This is one of those cases.

------
6gvONxR4sf7o
Without resorting to extinction events, it's not hard to imagine action-taking
models doing some really bad things. Maybe you have a contextual bandit making
loan decisions and it learns to make mostly predatory loans. Maybe your RL
agent decides to take an anticompetitive pricing scheme to drive out
competition. _Much_ later, we might even see our first full robo-business. If
it's uninterpretable, you might not even know what it's doing until it's been
done.
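
A hedged toy of the loan example (a plain two-armed bandit with made-up profit
numbers; a real contextual bandit would also condition on borrower features):

    import numpy as np

    # Reward = realized profit per loan. If fee structures make the
    # predatory product more profitable on average, a profit-maximizing
    # learner converges on it; no malice needed, just this objective.
    rng = np.random.default_rng(0)
    mean_profit = {"fair_loan": 1.0, "predatory_loan": 1.8}  # made up
    names = list(mean_profit)
    counts, totals = np.zeros(2), np.zeros(2)

    for t in range(5000):
        if rng.random() < 0.1:  # epsilon-greedy exploration
            a = int(rng.integers(2))
        else:                   # otherwise exploit the best estimate so far
            a = int(np.argmax(totals / np.maximum(counts, 1)))
        counts[a] += 1
        totals[a] += rng.normal(mean_profit[names[a]], 1.0)

    print(dict(zip(names, counts)))  # nearly all pulls go to "predatory_loan"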

The same research that prevents little problems should hopefully prevent big
problems. The research to take RL to the real world is progressing quickly. I
don't see a "kill us all" AI arriving soon, but "do bad things" AI and "wreck
the economy" AI are quickly approaching.

------
jhanschoo
All the talk and worry about AI seems to forget a discussion of the contexts
in which AI will be deployed.

Already, conventional algorithms are trusted in society, sometimes beyond
reasonable or safe levels. The ethical questions posed by ethical AI design
and deployment are exactly the ones our present-day reliance on (possibly
poorly-designed) algorithms poses.

Before we worry about AI harming us by an improper inference, we can, in the
present moment, already worry about automated systems harming us by improper
inferences (e.g. improper financial models, Tesla / aircraft driving
assistance); the discussion on how to limit and selectively deploy AI is no
different than that for present-day algorithms.

------
mazsa
Review of the book by Max Tegmark: https://www.amazon.com/gp/customer-reviews/RVSAD5GWSLQ42/

------
garyclarke27
A corporation or person that develops and owns an intelligent AI will have a
strong incentive to keep it secret and to not turn it off, because of the huge
economic benefits it would provide. If such an AI could imagine and innovate
to improve itself, e.g. new methods of computation to overcome Moore’s law,
then the risk is high of a recursive runaway exponential increase in
intelligence that would be extremely difficult to control and might not be
benign. I don’t believe this is imminent, but it is plausible, I think, within
most of our lifetimes, i.e. 25-30 years.

~~~
Nasrudith
The only reason to keep it secret is the irrational fear that it will somehow
kill us all with intelligence alone, and the assumption that exponential
increases can be achieved solo rather than with generations of bootstrapping
infrastructure and considerable resources.

Otherwise replicating and leasing would rake in far more than even a magic "I
get everything right on the stock market" button, in the same way that ruling
a stone age tribe gives less absolute wealth than being a Manhattan lawyer.

------
zwischenzug
'AI research is making great strides toward its long-term goal of human-level
or superhuman intelligent machines.'

This is what irritates me about articles like this. They state this (and then
come up with a bunch of weak arguments supposedly representing the opposition)
but provide no evidence of these great strides.

What we've had great strides in (as far as I know) is the application of
increasing computational power for specific applications. Which is great, but
is _not_ getting us nearer that goal.

~~~
satanspastaroll
It's more super-biased than super-intelligent AI. More can be taken into
account, and more efficient methods of calculation (e.g. mixed-precision FP)
are available.

------
otakucode
> We are unable to specify the objective completely and correctly, nor can we
> anticipate or prevent the harms that machines pursuing an incorrect
> objective will create when operating on a global scale with superhuman
> capabilities.

This is obviously incorrect. If an AI system of superhuman mental capabilities
is installed on a system with no network connection and no physical
interfaces, we can completely control the harm it might create. It would be
unable to directly act, of course, requiring humans to authorize and carry out
whatever plan it devises. But failing to do this should be prosecuted and
treated no differently from if a person or company built a tank and then
permitted it to drive over and through people. The biggest difficulty here
will be that our legal frameworks have no established way to assign liability
and criminal culpability to any system involving software. With no legally-
enforceable industry standards (like the electrical code followed by
electricians, things like that) any prosecution fails as the company can
either claim ignorance of the potential harm their system might cause or
simply throw individual employees under the bus, claiming they were not acting
on direct orders.

It is true that we cannot fully specify the goal, and that is a grave
concern. It's partially due to this that AI systems shouldn't be directly
connected to the ability to act in the physical world unless their attempted
actions go through some sort of 'filtering' first.

~~~
pixl97
> This is obviously incorrect. If an AI system of superhuman mental
> capabilities is installed on a system with no network connection and no
> physical interfaces, we can completely control the harm it might create.

First, why would someone build a system like that, and how would it actually
be 'smart'. If you lock an infant in a closed room, it will grow up to be a
literal retard. Intelligence requires interaction and data. Next, some idiot
will hook it to the net in a heartbeat when they figure out they can use it
to make money on the stock market. Never underestimate human greed.

------
moron4hire
Will superintelligent AI "destroy" life as we know it? Yes, as pretty much
every transformative technology before it has. Cars destroyed "life as we knew
it". Computers did, too. We live in a world that is very different from the
world 1000, 500, or even 250 years ago. Even our economically disadvantaged
people live way past 40 years old. People survive debilitating diseases and
lead productive lives, rather than being tossed in a ravine. Natural disasters
don't wipe out entire civilizations (and at least in the 1st world, are mostly
only economic in their impact).

Yet at the same time, other things have stayed the same. We still care about
mostly the same things. Food, Freedom, and Fornication. And complaining about
youngsters. I don't think that is going to change.

The fear that General AI is going to kill off humanity is predicated on the
idea that General AI will get smarter than humanity. I posit a different
interpretation of what is more likely to happen. General AI will require
creativity to be able to outsmart humanity. And with creativity comes boredom.
And distraction. And opinion. And argument.

I think General AI will be too interested in entertaining itself to take over
the world. We've had _people_ try to take over the world, on occasion, and it
has largely not gone well for them.

But before that happens, people will start adapting AI to their lives, to the
point that the line between "human" and "machine" will be blurred. So life "as
we know it" will definitely "end". It will be replaced with a new form of
life, one that isn't limited by only what nature can provide. One that cares
about Food, Freedom, and Fornication.

------
croo
Current weak AIs can do one thing at a time, like composing music, answering
Jeopardy questions, recognizing a cat in a picture, or playing Go against
humans and winning... I think the breakthrough will happen when someone
manages to create a weak AI which can write code.

~~~
AnimalMuppet
What kind of code? Code that satisfies a spec? Then the AI needs to be able to
understand the spec. Code that works? Then the AI needs to be able to tell
whether the code works. For either of those, I think we need more than "weak
AI".

Of course, a weak AI may be able to write syntactically valid code that
doesn't necessarily work. That's... not much of a breakthrough.

------
bronz
AGI is the most urgent existential threat. If you are concerned about AGI,
please join us at
https://www.reddit.com/r/PeopleAgainstAGI

------
tschellenbach
Glorified statistics at this point. It's very far away from any
superintelligent AI.

~~~
Tenoke
You can describe human intelligence as 'glorified statistics', too.

~~~
tschellenbach
See how good an adult is at chess after 10 games, then compare that to a
supercomputer AI trained on 10 games.

~~~
Tenoke
This is completely beside the point, but I'm guessing an adult who has never
encountered chess but has played other games will be way worse after 10 games
than an AI pre-trained on other games and then trained on 10 chess matches.

------
magwa101
"Superintelligent" AI is so far away that we are wasting our time thinking
about it. In the meantime the economy is being completely turned over by very
narrow AI, which should be our major focus.

------
robomartin
I am far more concerned about the very real potential for evolutionary forces
to decimate humanity inside of a few days, weeks or months.

While we worry about all kinds of things at the macro level, microscopic
organisms continue to evolve. Any given Monday the blind watchmaker could
deliver an invisible bug right into our laps, one for which we will have no
defenses.

Our interconnected world will mean this bug will easily jump population
centers.

The devastation we could see if such an event occurred could be of
unprecedented scale and breadth. We could lose one quarter or more of the
population of this planet given the right conditions.

So, yeah, AI could, I guess, go rogue. Yet the real threat is much nearer, in
what we can see and touch.

In fact, we might NEED real AI to save humanity when and if that super-bug
emerges. Think about that for a moment.

~~~
russdill
Can you point to any extinction-level events from a "superbug" for a species
that was not already teetering on extinction? Seems like something that only
comes around once every few hundred million years.

~~~
etiam
How about this one?

https://www.nytimes.com/2019/03/28/science/frogs-fungus-bd.html

https://en.wikipedia.org/wiki/Chytridiomycosis

~~~
russdill
Not a new pathogen, just a pathogen given more range than before.

------
PaulRobinson
Nitpick: the link title is jarring. Either take the headline of the piece
being linked to, or if you insist on forming a question that begs us to invoke
Betteridge's law of headlines, at least move the "are" to the start of the
sentence. This is grammatical gibberish.

------
iamasoftwaredev
Why does the post have a question mark? The title is a statement.

"Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong"

I dunno, I might listen to the AI experts on this one.

------
RHSman2
Humans will be wiped out/resources will be heavily reduced before this can
become a reality.

------
0xdeadbeefbabe
And the superintelligence should worry about mental illness.

~~~
moron4hire
Right. We apply some of the best minds and most powerful compute resources in
the world to _predicting the weather_ and yet can't get it right to a
reasonable degree for much more than a week. For an AI, trying to predict what
humanity will do to stop it will surely look a lot like paranoia.

~~~
0xdeadbeefbabe
> Switching the machine off won’t work for the simple reason that a
> superintelligent entity will already have thought of that possibility and
> taken steps to prevent it.

Also the statement presumes that self-preservation is the superintelligent
thing to do.

~~~
johnfactorial
Also the statement presumes the superintelligent entity will be capable of
taking all possible steps to prevent this. Imagining a machine capable of such
steps quickly devolves into imagining a machine already in total control of
the planet, which is begging the question.

