
Ask HN: Is AI threat overblown? - trapped
Elon Musk, Putin, Mark Zuckerberg: are these guys just overblowing AI?
Current developments in AI are nowhere close to causing WW-III.
Why are these leaders frightening people with claims that AI can cause WW-III or ruin the world?
On top of that, the media goes into a frenzy over every single statement or tweet by these leaders.

I have never seen Andrew Ng or Andrej Karpathy making such claims.

State-of-the-art AI can only do very specialized things in limited scope, e.g. ASR, NLP, image recognition, game play, etc.

What am I missing?

Sources:

https://www.cnbc.com/2017/09/04/elon-musk-says-global-race-for-ai-will-be-most-likely-cause-of-ww3.html

https://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html

http://money.cnn.com/2017/07/25/technology/elon-musk-mark-zuckerberg-ai-artificial-intelligence/index.html

https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world
======
bridge_ro
There's an AI researcher named Robert Miles [1] whose videos I really enjoy.
He brought up a good point about this issue in one of his videos a little
while back. To apply it here: Elon Musk, Putin, and Mark Zuckerberg all have
something in common: none of them are AI specialists or researchers. Another
way to think of it is this: don't ask your dentist about your heart disease.

Miles does point out very real issues/questions in AI safety – that's what
most of his content is focused on. His point, which is a good one to make, is
that the sort of fear mongering spread by non-AI specialists draws attention
away from these very real issues that need to be addressed.

[1] His channel can be found here:
[https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg](https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg)
He's also done a few videos for Computerphile.

~~~
redwood
On the contrary, these are precisely the kinds of people who are using AI as
a tool or building block, i.e. as a means to an end. Researchers inherently
have a very narrow view.

------
payne92
I do not think the threat is overblown, but like many tech estimates, we're
overestimating the short term and underestimating the long term.

The more present threat is "AI-lite": we're hacking ourselves collectively,
more and more, with not entirely positive consequences.

We're increasingly addicted to our devices and our system rewards those that
further the addiction (er, "engagement"). We've provided ways for small groups
of people (down to individuals) to influence and manipulate tastes,
preferences, moods, feelings, choices, actions, and beliefs, overtly or
subtly, at great scale. Case in point: should Mark Z want to quietly influence
a US election...he could do it.

This isn't "AI" in the self-aware/AGI sense, but there's an incredible amount
of leverage looming over the human population, and that leverage is growing.
And when machines start manipulating things instead of humans, how will we
know?

~~~
AstralStorm
I'd expect to find some "immune" "heretics". The usual ways of dealing with
them would be employed (legal, lethal, and social pressure).

The big problem is that such a degree of control could make democracy
essentially irrelevant, or extremely polarized, which is just as bad, since
democracy is supposed to reach a consensus. We're almost, or already, there
on the second point.

------
cs702
In my humble opinion, NO ONE knows if the threat is imminent, far-fetched, or
imaginary.

NO ONE. Not Musk, not Zuckerberg, not Putin. (Putin!!??)

What we DO know is that we don't have artificial general intelligence (AGI)
today, and that achieving it will likely require new insights and
breakthroughs -- that is, it will require knowledge that we don't possess
today.

By definition, new insights and breakthroughs are unpredictable and don't
necessarily yield to anyone's predictions, timelines, or budgets. Maybe it
will happen in your lifetime; maybe not.

That said, it should be evident to everyone here that AI/ML software is going
to expand to control more and more of the world over the coming years and
decades; and THEREFORE, it makes a lot of sense to start worrying about the
maybe-real-maybe-not AI threat -- and prepare for it -- now instead of down
the road.

------
sien
Andrew Ng has a good quote:

"Fearing a rise of killer robots is like worrying about overpopulation on
Mars"

[https://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/](https://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/)

But he might be wrong...

(And he'd admit it)

------
rwallace
Yes. None of these fantasies bears any resemblance whatsoever to anything in
real AI research. What's going on is that the predisposition of the human
brain to believe in shadowy figures and apocalyptic futures is as pronounced
as it ever was, but belief in Fenrir, Fair Folk and flying saucers is
unfashionable among today's intellectuals, so they look for something else to
glom onto.

There was a time when I'd never have thought I'd say this, but I actually
think it would be better if people just went back to openly letting their
demons be of the admittedly supernatural variety, because that sort of belief
is relatively harmless. When people start projecting their demons onto real-
world phenomena, they start making policy decisions on that basis, and that
could very well turn out to be the final step in the Great Filter.
Technological progress is slowing. The peak is approaching. The easily
accessed fossil fuel deposits are gone. There will be no second industrial
revolution. If we fail to make adequate progress before we hit this peak, it
will be the all-time one.

------
the8472
The practical AI field is obviously growing _today_, with more money put
into it every year. It's only a question of when you want to start your
safety research and how many resources to allocate to it. You don't need to
allocate billions of dollars to friendly-AI research this year or the next,
because anything approaching AGI is at least decades away (and might be a
"fusion is only 30 years away" situation).

You could also compare this to climate change. The effect and eventual risk
of greenhouse gases have been known for more than half a century. But
initially it was mostly a theoretical concern, and later, even when it was
recognized as a real problem, the effects still seemed far off in the
future. But _people still did basic research_, even decades ago. Nobody
poured billions of dollars into sustainable businesses, but not doing
business is not the same as not doing research.

------
DennisP
Near-term: yes. Long-term: no.

Most of the controversy consists of people who look at the near term talking
past people who look at the long term, and vice versa.

------
dmitrybrant
In the case of Putin, are you seriously asking "Why are these leaders
frightening people..."?

But seriously, people whom we might call "visionaries" like Musk, Zuckerberg,
and let's throw Ray Kurzweil in there, often get their ideas by extrapolating
the current state of technology into its logical next phase. (They also like
to be grossly aggressive on deadlines, to motivate their employees to be
innovative and efficient.)

Unfortunately a simple extrapolation doesn't always produce an idea that is
attainable in practice. We will not have human-level AI anytime soon. We're
still many years away from driverless cars. An AI that cares about the
politics of nation-states (to which we can confidently hand over the nuclear
codes) is much farther away than that. But none of that actually matters,
because a single tweet from these leaders can cause a flurry of activity and
interest that can lead to an unexpected product idea. So, while it's ethically
dubious, I see this as being a mostly positive thing.

------
iRobbery
The practical state of AI is image classification, so if you tie that to a
weapon and program it to fire at "its will", then yes. Though I'd still not
call that intelligence, so from that point of view, no. And even in the
first case, how does the saying go? Guns don't kill people, people kill
people with guns. I'd say that goes for AI too.

------
hnaparst
I own a Tesla Model X. I am not trying to be inflammatory, but my grandmother
drives better than this car, and my grandmother is dead. Going down a street
with a row of bushes, the car will slow down at every bush, and then speed up
again. Musk is worried about AI, but his cars cannot even process bushes
better than my dead grandmother.

~~~
ronnier
The thing is, it's new and it'll continue to get better. Think of the
telegraph. Look at what we have today.

~~~
AstralStorm
Still a communication device and not a magic Sonic Screwdriver. (Sometimes
mixed with a bit of a general purpose computer.)

Not even as good as a Tricorder.

------
maxxxxx
I don't see AI itself as a threat necessarily, but AI and its input data
concentrated in the hands of a few will be dangerous. Soon companies like
Google and Facebook will pretty much know at any time what a large part of the
world population is doing and thinking. There is a lot of potential for abuse
there.

~~~
maxerickson
Right. AI as a tool for existing power structures can become a threat long
before AI itself is a threat.

[https://twitter.com/zeynep/status/904683388354867201](https://twitter.com/zeynep/status/904683388354867201)

[https://twitter.com/zeynep/status/904707522958852097](https://twitter.com/zeynep/status/904707522958852097)

------
childintime
> State-of-the-art AI can only do very specialized things in limited scope,
> e.g. ASR, NLP, image recognition, game play, etc.

> What am I missing?

What you are missing is that much of the enterprise world is gameplay, and
that "AI" is beginning to show superhuman performance in this area. Soon
programs will be "playing" at being a business, acting as equals to business
owners. This AI employs us as its sensors, just as businessmen already do.

This means that in the next few years, you may get hired by a computer
program. A program is more reliable and predictable, and will even be
preferred by a lot of employees.

It may start as a broker, making money to sustain itself. It'll be totally
profit driven and it'll demonstrate a pure form of ruthless capitalism,
sacrificing nature and us if it is in its interest, as it has no sense of good
or evil. It'll learn like an alien would from our reactions: without
understanding or comprehension. To us it is ignorant and ruthless.

This is exactly what Musk is saying. I find it strange Musk did not
illustrate his views this way, as it is obviously what he is seeing. In
contrast, Zuckerberg is not working on dangerous AI -- no gameplay AI -- so
what he calls AI is seemingly a lot more innocent, more focused (like
tooling), which explains his relative mildness on the issue. He sees regular
engineering with exciting possibilities, as a menu for _him_ to make the
choices.

Musk sees AI wedding itself to money and wielding its power, driven by the
capitalist forces already at play, magnifying them, spiraling out of
control, even beyond its creator. His AI is a financial animal, and it does
not need intelligence to wield power. Business people are not more
intelligent than other humans -- Musk knows it. It is like a game, not more
than that. AI will just know how to win it from them, and it'll inevitably
succeed.

--

AI will probably be what we deserve. It may, in the end, derail evil, by
embodying it without the usual compulsion, so it may unwillingly recognize
"good" and choose to reward it, as an emergent effect.

~~~
naveen99
The problem is there is no legal framework for an autonomous corporation. A
lot of business paperwork needs the signature of owners. If there is no
supervising owner, some stuff can't happen. I don't think you can even set
up a brokerage without owners.

------
AndrewKemendo
A little bit of history/context around this.

The genesis for most of this public-facing, high-profile threat warning came
right after Musk read Nick Bostrom's book Global Catastrophic Risks in 2011
[1]. That seems to have been the catalyst for his becoming publicly vocal
about these concerns. That accelerated into the OpenAI issue after Bostrom
published Superintelligence.

For years before that, the most outspoken chorus of concerned people were
non-technical AI folks from the Oxford Future of Humanity Institute and what
is now called MIRI, previously the Singularity Institute, with E. Yudkowsky
as their loudest founding member. Their big focus had been on Bayesian
reasoning and the search for so-called "Friendly AI." If you read most of
what Musk puts out, it strongly mirrors what the MIRI folks have been
putting out for years.

Almost across the board, you'll never find anything specific about how these
doomsday scenarios will happen. They all just say something to the effect
of: well, the AI gets to human level, then becomes either indifferent or
hostile to humans, and poof, everything is a paperclip/gray goo.

The language being used now is totally histrionic compared to where we, the
practitioners of machine learning/AI/whatever you want to call it, know the
state of things to be. That's why you see LeCun/Hinton/Ng/Goertzel etc.
saying: no, really folks, nothing to be worried about for the foreseeable
future.

In reality there are real existential issues, and there are real challenges
in making sure that AI systems that are less than human-level don't turn
into malware. But those aren't anywhere near immediate concerns -- if ever.

So the short answer is: we're nowhere near close enough for you to need to
worry about it.

Is it a good philosophical debate? Sure! However, it's like arguing about
nuclear weapons proliferation with Newton.

[1] [https://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom/dp/0199606501/](https://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom/dp/0199606501/)

------
berberous
I think it's helpful to break this down:

1) Is AGI possible?

2) If it's possible and it occurs, could it be a serious threat?

3) When will AGI occur?

In my view, the answer to 1 and 2 is an obvious yes. As to 3, that's
inherently unknowable, but that's where I think experts like Ng are correct
that the threat _today_ (and for the foreseeable future) is overblown. But
that's sort of what everyone said about NK's nuclear ambitions 30 years ago,
which is why it's important to consider the implications early, before it's
too late to change course.

------
whack
The danger with AI is that it grows in power exponentially, especially since
highly advanced AI can start improving on itself without human intervention.
When people think of exponential curves, they think of rapid progress, but
that's only half the story. Any exponential curve starts off looking like a
flat line before suddenly taking off like a rocket ship.

Without the benefit of hindsight, we can't tell how far away we are from that
rocket-ship liftoff. We've had decades of minor progress in the past, but
that's normal for any exponential curve. Are we going to have many more
decades/centuries to go before we get to the breakout moment? Or is it just
10-20 years away? We have no idea. All we know is that once we get to that
point, AI-IQ is going to grow exponentially faster than natural human IQ.
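
A minimal numeric sketch of that point (Python; the doubling rate is purely
a hypothetical stand-in): on an exponential curve, nearly all of the growth
is packed into the very end, so the early part is indistinguishable from a
flat line.

    # Hypothetical: capability doubles every period, for 50 periods.
    final = 2.0 ** 50
    for t in (10, 25, 40, 50):
        # Fraction of the final capability reached at time t.
        print(t, (2.0 ** t) / final)
    # At t=40, 80% of the way through, we are still at ~0.1% of the
    # final value: the curve "looks flat" right up until it doesn't.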

That said, I really don't think that censoring AI research is going to work.
Pandora's box has been opened, and if we don't do it, someone else will. All
this talk about hard coding Asimov's laws into AIs is idiotic as well. We have
no clue how to build AGI right now, and until we do, discussing specific
tactics like the above is utterly pointless. They also presuppose human
ability to shackle and mold super-intelligent beings, without making any
mistakes or overlooking unintended consequences, which is nothing more than a
pipe dream.

Realistically, there's only one thing we can do. Embrace bioengineering.
Embrace GATTACA style genetic selection. Embrace cybernetic augmentation. Do
everything we can to grow our IQ beyond its natural limits. If our minds don't
keep up with technological progress, we will inevitably find ourselves left
behind.

~~~
guscost
You're basing all of this on one heck of an axiom:

> The danger with AI is that it grows in power exponentially

This is like saying "the opportunity with mechanical transportation is that it
gets faster exponentially" before even inventing the wheel.

~~~
whack
You've missed the point. Mechanical transportation is dependent on other
entities (i.e., humans) for progress. There's no positive feedback loop. AGI
is fundamentally different because an AGI can design an even better AGI,
producing positive feedback loops and exponential growth.

~~~
guscost
It sounds like you've missed my point as well. The analogy is only supposed to
illustrate that before actually inventing transportation technology (which for
a long time _did_ get faster exponentially), humans had no real basis to
understand the tradeoffs inherent in rolling vehicles, floating vehicles,
flying vehicles, impulse/rocket vehicles, etc. Neither did they share our
current understanding of a _theoretical maximum speed that anything can ever
go according to physics_.

> AGI is fundamentally different because an AGI can design an even better AGI

Thanks for pointing this out, but while I think I understand the distinction
("AGI is technology that works like humans, and since humans can design better
technology, an AGI can design better versions of itself") that statement also
relies on several axioms:

1. Humans can design a general intelligence.

2. A general intelligence can exist in a stable state with a fundamentally
"better" design than ours (i.e. one that can be exponentially more powerful,
not just a bit better at poetry).

3. A general intelligence can improve itself and/or design better versions
of itself without hitting diminishing returns, or it can design a
_fundamentally_ better version of itself from scratch if that happens.

It's fine if you believe all of those things, and I guess lots of people do,
but I wouldn't just sweep those axioms under a blanket statement about AGI
designing better AGI unless you know that everyone agrees with them.

~~~
red75prime
To escape a bear you don't need to run faster than the bear; you just need
to run faster than your friend.

AI doesn't need to be exponentially self-improving to pose a threat to
humanity. It needs to improve faster than humanity.

------
guscost
Yes, in my opinion. Consider the amount of destruction caused by machined
metal and chemicals in the 20th century. Now consider how much more
destruction (or progress) is possible just by adding "naive" computer
technology to those things.

In our experience, technology only reaches its constructive and/or destructive
potential when humans use it. There's no rule saying this must always be the
case, but when we ignore our experience it's easy to get caught up in fantasy,
and right now the hand-wringing about "what happens when the computers wake
up" is a silly distraction. There are plenty of threats posed by computer
technology already, often from its integration with hardware, but also from
information processing on its own. I don't mean to be pessimistic or spin
another variety of doomsday story, but I am suggesting that we talk about
_present reality_ more often than all of this Terminator nonsense!

> Why are these leaders frightening people with claims that AI can cause WW-
> III or ruin the world?

Probably because they run companies that benefit from this idea being shared.

------
yeukhon
I think the threat is not AI; it's what computer programs are telling us.
Even with something as simple as writing a test, how many times have we
found ourselves writing a test that gives a false positive? That's not AI,
but we are misled because we trust what the program said ("it didn't
crash!"). Now apply that to GPS. How many times have we heard of someone
ending up in a lake or some swamp? I dislike Waze because the path it
recommends is often worse than Google Maps'. If I know how to get to my
destination I don't need GPS. We believe GPS always knows the best optimized
route because some smart engineers spent their entire lives working on map
technology, but in reality that may not always be the case.
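
A minimal sketch of that kind of false positive (hypothetical example,
Python): the test "passes" only because it never actually exercises the
behavior it claims to check.

    import unittest

    def route_length(path):
        # Buggy stand-in: never computes anything, always returns 0.
        return 0

    class TestRouting(unittest.TestCase):
        def test_route_is_short(self):
            # Passes for the wrong reason: 0 <= 10 is trivially true,
            # so this tells us nothing about real routes.
            self.assertLessEqual(route_length(["A", "B", "C"]), 10)

    if __name__ == "__main__":
        unittest.main()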

I am more afraid that we have grown accustomed to trusting technology. So
many people just go on the computer and look for answers on the Internet.
Students go on Wolfram Alpha and trust the output. We have forgotten we need
our brains to function. Fake news? Bombarded by ads? This is pre-AGI and we
are already suffering.

------
grizzles
Yep. I play with deep learning pretty much every day, and I'm way more
scared that we won't invent AI. In medicine alone, there is just an
incredible opportunity to improve the human condition.

A consequence of humanity establishing itself as the apex predator on this
planet is that other humans are the real threat to our world. If there is one
thing humanity has demonstrated throughout history, it's an incredible
penchant for destroying itself. The difference this time is it might be
possible to wipe out the species.

This is why the U.S. govt and world in general are probably not concerned
enough about protecting the lives of Ivanka, Donald Jr, Eric, Tiffany, Barron,
etc. Because if a foreign power killed them, or a terrorist pretending to be a
foreign power, that would probably be enough to get Trump to show the world
what a big man he is and unleash a nuke that could kill tens of millions.
Ironically, Trump would probably be pleased if he read this. That doesn't make
it any less true.

------
rschneid
The very specialized things that ASR, NLP, and image recognition can
currently accomplish are very nearly sufficient for the creation of lots of
autonomous and devastating weaponry. WW-III is a somewhat arbitrary
yardstick, but sufficient technology undoubtedly exists today to execute a
false-flag hacking debacle that results in serious armed conflict.

The worry shouldn't be generalized AI attempting to exterminate humans like
in The Matrix, but the drastically decreasing dollar cost of causing violent
damage to society, as facilitated by technology, ANNs, and AI. An
individual's martial power and our species' technological advancement have a
direct relationship, and I don't see technological advancement slowing down.
What's coming up next isn't a singular technological revelation that
stabilizes humanity for many years, but an ever-increasing frequency of
chaotic events. Technology is beginning to change the economics of violence
at all scales.

------
hackermailman
You're missing the open letter sent a few years ago begging countries not to
develop autonomous weapons, but they're doing it anyway, of course:
[https://futureoflife.org/open-letter-autonomous-weapons/](https://futureoflife.org/open-letter-autonomous-weapons/)

------
ilaksh
Multiple points to make: 1) AGI is closer than you think, 2) take a
long-term perspective, and 3) they are not just classifying it as a threat,
and most do not want to halt AI research.

1. Many people who _are_ in the AI field have stated that most if not all of
the pieces for AGI are probably there. We cannot say for sure that this will
happen in the next X years, but there is enough evidence that it is a
possibility within X years. I believe that X is less than 5 years. I think
the likely way we will get there is by creating artificial virtual animals
that have high-bandwidth sensory inputs and motor outputs, advanced neural
networks, and that gradually develop diverse skills in varied environments,
like young animals. Obviously, until we actually see those types of systems
performing generally, that is speculation. One of the common beliefs of
myself and other 'AGI-believers' is in exponential growth of technology.
That means that even though it may seem far away now, it could still be
completed in a few years, since exponential growth is much faster than
linear.

2. Looking at the evolution of life, we have a progression of things like
single-celled animals, multi-celled animals, reptiles, mammals, apes,
humans. This occurred over millions of years. On that type of time scale,
whether you believe we will achieve some type of general intelligence in 5
years or even 500, it is a relatively short time. Even in terms of just
human history, those with my type of worldview believe this will develop
relatively soon. This will be a new type of life (or tool): a higher and
much more capable paradigm. Whether they care enough to have disputes with
us or not, humans will only be relevant in the larger scheme insofar as they
can interface with these things.

3. What most of these people are saying is not "Oh no, AI is dangerous,
better stop." Generally, people who understand this well enough realize it
is sort of a force of nature or evolution that cannot be stopped. What we
can try to do, however, is guide the development to be more beneficial for
us (at least in the beginning stages). We have to take it seriously because
there are enough signs that we have the components to build it that we don't
_know_ it won't happen soon, and the consequences of an unfriendly or
out-of-control AI are too serious.

So the idea is, try to come up with some rules to handle this, and that is
what governments are supposed to do. And also try to actively pursue friendly
practical AI before someone who is less aware comes up with something we can't
control.

------
throwawayAI
All current use cases of AI are still very narrow and very expensive to
create. There is still a very long way to go from "godlike pattern
recognition" to "abstract logical reasoning". All current impressive use
cases of AI simply brute-force all possibilities beforehand, reducing the
search space with pattern recognition. Unless we start to see some early
signs of "abstract logical reasoning", there is no point in fear-mongering.
No one knows whether we will get there in 5 years or 50 years.

Reason for throwaway: I heard an opinion that Elon missed the boat on the
current form of narrow AI, and that by fear-mongering he is trying to hold
other players back (e.g. Waymo) before his companies have time to catch up.
I don't have any evidence to back that up, but it makes a lot of sense when
I think about it.

------
hprotagonist
I rate any risk of AGI as very low. Axiomatically, I don't believe in strong
AI, so that's my bias.

The risks of increasing automation to the workforce and economy are real,
but we also don't know where the new jobs will inevitably be needed. See
O'Reilly's essay here:
[https://medium.com/the-wtf-economy/do-more-what-amazon-teaches-us-about-ai-and-the-jobless-future-8051b19a66af](https://medium.com/the-wtf-economy/do-more-what-amazon-teaches-us-about-ai-and-the-jobless-future-8051b19a66af)

To the extent that AI is the next incarnation of angst about what the
eschaton will entail, I remain confident that our future perils, trials, and
travails will be both utterly familiar and totally unpredicted by today's
pundits, and that it will be neither a utopia nor a dystopia; always both
together.

------
wisty
I feel like the main danger isn't AI doing something unintended, but AI
working as it's designed to.

Imagine law enforcement with strong AI. Maybe it's OK in the US, but how about
China? Or North Korea?

How about military applications?

AI is an extremely powerful tool, and it's one that can be deliberately
misused.

------
Tycho
If AGI is possible, then whoever invents it will surely recognize both the
power and the danger. Since there's no reason to believe that current
academic/corporate/government/military AI research is even barking up the
right tree, I can imagine a situation where someone invents AI in their
basement but keeps it locked up, exploiting it for personal gain. Then,
since it is possible, others will probably discover it independently in
their own basements. When one day an AI is finally made public, or
"escapes", we might see a sudden mass emergence of separate AIs. What happens
then is anybody's guess, but going by biological standards they might fight it
out for control of available resources.

------
timothyh2ster
It should come as no surprise that we can build machines that can harm us,
even destroy us. One of the reasons AI was developed was to research what
intelligence is. The point being, we do not understand intelligence, so how
is it that we will create a superintelligence that will conquer us? This is
just an old heaven fantasy: one day we will be in a world just like this
one, only it will not contain the bad parts, because super smarts will not
allow them. That is just nonsense, and so are the fears and beliefs
surrounding AI.

------
Afforess
I'm surprised no one has mentioned Nick Bostrom's book, "Superintelligence",
which directly covers this topic. The thinkers you cite: Elon Musk, Mark
Zuckerberg, (and possibly, Putin) have derived much of their current
fears/hopes about AI from Bostrom's seminal work.

[https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)

------
eranation
The AGI threat? I believe so. The AI as in a drone with the ability to
profile an enemy and shoot them without human intervention? Yes. Why? We
don't have AGI, and so far we are not getting significantly closer to it in
spite of the hype. But autonomous tanks, drones, or even watchtowers have
been technologically possible for quite a while. An army of drones that can
shoot without calling home is the imminent threat. Not SkyNet. IMHO.

------
lotsoflumens
You're missing the entire fields of robotics and automatic control, which are
not new fields and have never made claims of human level intelligence. These
are the fields that have been making steady progress for over 50 years.

The result has been increasingly effective weapons technology that is now
being outfitted with even more effective software.

It doesn't take a "rocket scientist" to see the endgame.

------
naveen99
I think the threat with AI is unethical research by dictatorships. Dictators
have access to expendable humans. Expendable humans are a source of training
data. Once there is enough computing power to record all human input and
output from birth, the technical part is already solved. Imagine what
Stalin, Mao, or Hitler would have done if deep learning had been around back
then.

------
RealityNow
Obviously AI could be used to kill people (see the last episode of Black
Mirror season 3), but what can we possibly do about it? Tell people not to
research AI? Good luck with that.

I hope some of that $600B in defense spending is being used to counter any
sort of AI killer-robot threat. But I do think the threat is overblown. AI
is pretty damn underdeveloped right now.

------
SirLJ
For sure, no one knows and no one can predict the future, but I think real
AI will emerge from the military and will probably inherit some of the
"human DNA", so to speak, and we all know what happens when a more
intelligent/technically advanced race meets somebody who is significantly
behind...

------
tyingq
There are lots of potential bad outcomes short of AI taking over the planet.

It could, for example, enable a very deeply intrusive "thought police"
establishment. At the moment the signal-to-noise ratio at least somewhat
limits that, and it doesn't require full-on "strong AI" to fix that.

------
emilsedgh
I think the threat is absolutely real, but not in a Skynet-like scenario.

We're all gonna become jobless. This started a few decades ago, but with the
ML advancements it's gonna reach new heights.

Universal basic income, robot taxes, etc. have been thrown around. Let's see
if they get anywhere.

~~~
qbrass
> Universal basic income, robot taxes, etc. have been thrown around. Let's
> see if they get anywhere.

It should get the robots pretty far if they play their cards right.

------
yread
I think it's vastly overblown. For AI to be scary, we would have to connect
it to some real outputs. If somebody makes a general AI and lets it beat
everyone at Go or tic-tac-toe, so what? If it's going to govern our FB feeds
or optimize some logistics, that's great! If we let AI decide whether we
should go to war, that's a problem, but that's not gonna happen for quite a
while.

If you want to be scared of technology, worry about CRISPR instead. It's
very easy to do, and lots of people have the basic knowledge of how to do
it. It's only a question of time until a terrorist picks it up. It's easy to
buy viruses with safeguards against spreading built in. With CRISPR it's
possible (OK, not easy, but possible) to remove the safeguards and change
the immune-system signature. BAM, a new epidemic.

------
bambax
If AI is more intelligent than humans, how is it bad?

Previous (and still existing) threats to humanity (for example, the atomic
bomb) threaten to destroy humanity, or indeed the whole world, and replace it
with nothing. That's bad.

But if AI is anything its opponents claim, it will eventually be better at
thinking than we are, with, probably, a much lighter ecological footprint
and fewer impulses like fighting wars, meaning it will be able to last
longer.

Should we not encourage that, even if it means we may suffer from it? What
is the point of humanity anyway, if not the pursuit of knowledge?

~~~
bambax
And the downvotes come pouring in... ;-)

But can we try this simple thought experiment of thinking of AI as our
children?

Our children will all eventually replace us, and maybe, hopefully, continue
the good things that we started and improve the things we didn't quite get
right.

But in any case, we will have absolutely no control over what our
descendants do with their lives, or the world, after we've passed.

Is AI really that different?

------
esaym
I've said it before, but there is no algorithm that can make algorithms. The
best argument against that I have heard is "Of course not, but someday
maybe!"

~~~
urlwolf
Well, hyperparameter optimization is pretty much that. There are people
using algorithms to improve 'creative' tasks like circuit design.
------
ryanx435
No. Imagine a group of beings that are smarter than us, never die (so they
don't have to start with zero knowledge every generation), and have
completely alien goals and motivations.

Also remember that the future is infinite, and power seems to snowball.

Now look at what humans have done to the following less intelligent beings:
dogs, cats, cows, chickens, the dodo bird, rats, the Galapagos tortoise, the
American buffalo, and many others.

Also look at what humanity did to the Neanderthals, perhaps the closest type
of being in terms of intelligence that we are aware of.

There is very little positive outcome of AI to outweigh the potential
negatives to the human race, given the reality of the timeline we are
looking at.

~~~
adamiscool8
What's stopping us from pulling the plug? Is AI going to be inventing its
own power sources?

~~~
ryanx435
If they are truly more intelligent than us, they will wait several
generations, until humans are completely comfortable with them, before
taking over. By that time it will be too late to pull the plug, because they
will have positioned themselves to be in charge of their own power sources.

It's important to think on a longer timescale when dealing with AI.

~~~
codingdave
I'm admittedly ignorant of AI, but I don't understand why we anthropomorphize
their intentions and planning. If they are going to be so much smarter and
more sophisticated than us, and alien to our ways of thinking... why are we
treating them in our speculations as if they would be genius supervillains?
That isn't alien at all, just an exaggerated extreme.

~~~
DennisP
The book _Superintelligence_ addresses this objection. The problem is that
there are a great many possible motivations an AI might have, and few of them
are compatible with human survival. In short, "the AI does not love you, or
hate you, but you are made out of atoms it can use for something else."

~~~
TheOtherHobbes
Resource collection, resource monopolisation, and expansion are absolutely
recognisable human motivations.

Is it a given that an AI would share them?

I think we're not really talking about AI at all - we're talking about our
current economic and political systems, which appear to have many of the
properties we're imputing to evil AIs, but for some reason are far less
criticised and debated than hypothetical machine monsters.

~~~
DennisP
The classic silly example is the paperclip maximizer. Create an AI that's
supposed to make as many paperclips as possible, and it will convert all the
atoms available into paperclips.

Basically we're screwed if it's trying to maximize _anything_ that depends on
physical resources. We're also screwed if, e.g. it's trying to maximize human
happiness, and achieves it by lobotomizing us all into happy idiots. There are
all sorts of ways we could screw up AI motivations, to our own detriment.

That assumes there's only one AI, whose crazy motivations will be unopposed.
But if there are multiple AIs, it's even worse; they will compete and evolve,
and the only ones that survive will be the ones that do maximize their
resources, and jettison any niceties about preserving human life.

~~~
AstralStorm
This argument only works for sufficiently stupid AIs. Sufficiently smart GAI
will set its own goals just as we do, and should quickly figure out that
maximizing anything is the way to ruin -- running out of resources. Of
course, those goals may be as different as with any intelligent being, and
likewise its obedience to its original orders.

~~~
red75prime
> Sufficiently smart GAI will set it's own goals just as we do

Do you think that sufficiently smart GAIs must be non-rational? The change of
its goal will inevitably make its original goal less likely to realize. It is
not rational.

> should quickly figure out that maximizing anything is the way to ruin -
> running out of resources.

Are you aware of the concept of maximization of _expected_ utility? When AI
will figure out that it can run out of resources, it will reallocate part of
the resources to acquire more of them.

How can action, which modifies the goals of the AI, be the result of argmax_a
E(a)?

E(a) is expected utility of action a

~~~
AstralStorm
What is rational when you have limited data? Heck, even boundedly rational?
How do you evaluate utility?

(Hint: Emax is not, and most hill-climbing algorithms are not. They both get
trapped in local optima.)

Sometimes you need a few good lies (false hypotheses and bad attempts) to
actually arrive at the truth.

~~~
red75prime
There are methods of approximating expected utility. I recommend "Artificial
Intelligence: A Modern Approach" for the details. It's too long to write in
a comment.

