
Superintelligence: The Idea That Eats Smart People - pw
http://idlewords.com/talks/superintelligence.htm
======
alexbecker
While I agree with Maciej's central point, I think the inside arguments he
presents are pretty weak. I think that AI risk is not a pressing concern even
if you grant the AI risk crowd's assumptions. Excerpted from
[https://alexcbecker.net/blog.html#against-ai-risk](https://alexcbecker.net/blog.html#against-ai-risk):

The real AI risk isn't an all-powerful savant which misinterprets a command to
"make everyone on Earth happy" and destroys the Earth. It's a military AI that
correctly interprets a command to kill a particular group of people, so
effectively that its masters start thinking about the next group, and the
next. It's smart factories that create a vast chasm between a new, tiny
Hyperclass and the destitute masses... AI is hardly the only technology
powerful enough to turn dangerous people into existential threats. We already
have nuclear weapons, which like almost everything else are always getting
cheaper to produce. Income inequality is already rising at a breathtaking
pace. The internet has given birth to history's most powerful surveillance
system and tools of propaganda.

~~~
modeless
Exactly. The "Terminator" scenario of a rogue malfunctioning AI is a silly
distraction from the real AI threat, which is military AIs that _don't_
malfunction. They will give their human masters practically unlimited power
over everyone else. And AI is not the only technology with the potential to
worsen inequality in the world.

~~~
otakucode
Human beings have been extremely easy to kill for our entire existence. No
system of laws can possibly keep you alive if your neighbors are willing to
kill you, and nothing can make them actually unable to kill you. Your neighbor
could walk over and put a blade in your jugular, you're dead. They could drive
into you at 15MPH with their car, you're dead. They could set your house on
fire while you're asleep, you're dead.

The only thing which keeps you alive is the unwillingness of your neighbors
and those who surround you to kill you. The law might punish them afterward,
but extensive research has shown that it provides no deterrent to people who
are actually willing to kill someone.

A military AI being used to wipe out large numbers of people is exactly as
'inevitable' as the weapons we already have being used to wipe out large
numbers of people. The exact same people will be making the decisions and
setting the goals. In that scenario, the AI is nothing but a fancy new gun,
and I don't see any reason to think it would be used differently in most
cases. With drones we have seen the CIA, a _civilian_ intelligence agency,
waging war on other nations without any legal basis, but that's primarily a
political issue and the fact that it can be done in pure cowardice, without
risking the life of those pulling the trigger, which I think is a distinct
problem from AI.

~~~
chrshawkes
That's not true. If I know my neighbors are coming, I'm loading my Remington
870 and waiting for that door to open. In America we're allowed to bear arms
for protection.

~~~
pc86
You are missing the point. If everyone around you wants you dead and is
willing to act on it, you're going to die. If the CIA knows of a terrorist
camp and wants
to kill the people there, they are going to die.

AI doesn't change this; it just makes it easier.

~~~
andrepd
Everyone wanted Bin Laden dead, but it still took a few years to manage that.
So perhaps it's not quite so straightforward.

~~~
generj
Not everyone wanted Bin Laden dead. Critically, the people he was hiding with
and his organization as a whole very much wanted him alive. The people who
actually surrounded him on a daily basis did not want to kill him.

------
apsec112
"I live in California, which has the highest poverty rate in the United
States, even though it's home to Silicon Valley. I see my rich industry doing
nothing to improve the lives of everyday people and indigent people around
us."

This is trivially false. Over a hundred billionaires have now pledged to
donate the majority of their wealth, and the list includes many tech people
like Bill Gates, Larry Ellison, Mark Zuckerberg, Elon Musk, Dustin Moskovitz,
Pierre Omidyar, Gordon Moore, Tim Cook, Vinod Khosla, etc, etc.

[https://en.wikipedia.org/wiki/The_Giving_Pledge](https://en.wikipedia.org/wiki/The_Giving_Pledge)

Google has a specific page for its charity efforts in the Bay Area:
[https://www.google.org/local-giving/bay-area/](https://www.google.org/local-giving/bay-area/)

This only includes purely non-profit activity; it doesn't count how eg.
cellphones, a for-profit industry, have dramatically improved the lives of the
poor.

~~~
pmyjavec
I feel the problem is the fact that there are 100 billionaires in the first
place; no one gets rich on their own. Gates et al. are clever, but they didn't
get where they are totally independently, without others' support, so they
should give back.

Also, some of these billionaires are running companies that are great at tax
avoidance, probably most of them. Now what? They get to pick and choose where
they spend/invest their money? I don't buy it.

I believe in wealth, just not this radical wealth separation.

~~~
apsec112
Countries that have no rich people are never prosperous. You can raise
marginal income tax rates from, say, 60% to 70%, and maybe that's a good idea
overall, but it doesn't get rid of billionaires. High-tax Sweden has as many
billionaires per capita as the US does:
[https://en.wikipedia.org/wiki/List_of_Swedes_by_net_worth](https://en.wikipedia.org/wiki/List_of_Swedes_by_net_worth)

If you raise the marginal tax rate to 99%, then you get rid of billionaires,
but you also kill your economy. There are all the failures of communist
countries, of course, but even the UK tried this during the 60s and 70s. The
government went bankrupt and had to be bailed out by the IMF. Inflation peaked
at 27%, unemployment was through the roof, etc.:

[https://en.wikipedia.org/wiki/1976_IMF_Crisis](https://en.wikipedia.org/wiki/1976_IMF_Crisis)

[https://en.wikipedia.org/wiki/Winter_of_Discontent](https://en.wikipedia.org/wiki/Winter_of_Discontent)

~~~
mljoe
I don't think an income tax that punishes people for making too much money is
the right way to go about it. How about instead of punishing people for being
rich, discourage the filthy rich from spending money on the frivolous. For
instance, set up a luxury tax on expensive cars, private jets and jet fuel,
first class transportation and primary residences and hotels that are way
above the average value for an area. On the other end, have tax credits (not
just a tax deduction) for contributing to charitable causes, or for taking
business risks that drive innovation.

~~~
vosper
I think it might be great to encourage the rich to spend as much as possible.
Don't the expensive cars, private jets, and first class transportation support
whole networks of businesses, and provide employment?

~~~
simonh
Yes, but you also need to look at the products of those people's labor and
other things that labor could be used for. Do we need more people building and
crewing luxury yachts, or building and operating hospitals and sheltered
accommodation? In both cases people are paid to do work, but the products of
that work are very different.

But in practice much of the wealth of super-wealthy people is actually either
tied up in the value of the businesses that they own, which are often doing
economically valuable things, or is invested in useful enterprises (shares),
or funds useful activities (bonds). It's not as though the net wealth of
Warren Buffett is all being thrown on hookers and blow.

There are already ways to direct the spending of the wealthy towards more
productive uses, such as consumption taxes on luxury goods. But if they take
their wealth to other countries with laxer consumption taxes, there's not a
lot we can do about it. So we're back to the libertarian argument. At some
point you get into questions of freedom and individual rights.

~~~
Nevermark
The problem is not that there are rich people buying nice things.

The problem is when poor or middle class people are unable to improve their
situation or lose ground.

The only time rich people are a problem for poor people is when rich people
are able to corrupt government to tilt the playing field their way. This is a
problem of corrupt politicians and lack of anti-corruption law.

I think people underestimate how many economic difficulties are not caused by
economic effects, but by corrupt politicians who are permitted to stack the
deck against the average person as a way to fund their campaigns or rack up
post-governance favors.

------
apsec112
This article explicitly endorses argument ad hominem:

"These people are wearing funny robes and beads, they live in a remote
compound, and they speak in unison in a really creepy way. Even though their
arguments are irrefutable, everything in your experience tells you you're
dealing with a cult. Of course, they have a brilliant argument for why you
should ignore those instincts, but that's the inside view talking. The outside
view doesn't care about content, it sees the form and the context, and it
doesn't look good."

The problem with argument ad hominem isn't that it never works. It often leads
to the correct conclusion, as in the cult case. But the cases where it doesn't
work can be really, really important. 99.9% of 26-year-olds working random
jobs inventing theories about time travel are cranks, but if the rule you use
is "if they look like a crank, ignore everything they say", then you miss
special relativity (and later general relativity).

~~~
maxerickson
Einstein didn't look like a crank though. His papers are relatively short and
coherent, and he either already had a PhD in physics or was associated with an
advisor (I didn't find a good timeline; he was awarded the PhD in the same
year he published his 4 big papers).

Cranks lack formal education and spew forth gobbledygook in reams.

~~~
rntz
By this measure, I would say Bostrom is not a crank. Yudkowsky is less clear.
I'd say no, but I'd understand if Yudkowsky trips some folks' crank detectors.

~~~
maxerickson
Einstein's paper on the photoelectric effect is a bit less than 7000 words.

It is part of the foundation of quantum mechanics.

 _Superintelligence: Paths, Dangers, Strategies_ is in the range of 100,000
words (348 pages * roughly 300 words per page).

I'm not familiar with it, but looking around, it isn't clear that it even
lays out any sort of concrete theory.

~~~
rntz
1. Plenty of academics write books.

2. Comparing a paper and a book for length is obviously unfair. Bostrom has also written papers:
[https://en.wikipedia.org/wiki/Nick_Bostrom#Journal_articles_...](https://en.wikipedia.org/wiki/Nick_Bostrom#Journal_articles_.28selected.29)

3. "Concrete theory" is vague. Is it a stand-in for "I won't accept any argument by a philosopher, only physicists need apply"?

~~~
maxerickson
I'm not intentionally trying to snub philosophy. With concrete theory, the
point I was reaching for is that when you look to measure impact you probably
want to point back to a compact articulation of an idea.

The book comparison was probably a cheap shot (on the other side of it,
Einstein didn't need popular interest/approval for his ideas to matter; I
think that is a positive).

I think as much as anything the comparison is worthless because we can look
backwards at Einstein.

~~~
rntz
Sure, that's fair. I think Bostrom is no Einstein. But I maintain that he's no
crank, either. There's a lot of space in the world for people who are neither.

------
apsec112
"Not many people know that Einstein was a burly, muscular fellow. But if
Einstein tried to get a cat in a carrier, and the cat didn't want to go, you
know what would happen to Einstein. He would have to resort to a brute-force
solution that has nothing to do with intelligence, and in that matchup the cat
could do pretty well for itself."

This seems, actually, like a perfect argument going in the other direction.
Every day, millions of people put cats into boxes, despite the cats not being
interested. If you offered to pay a normal, reasonably competent person $1,000
to get a reluctant cat in a box, do you really think they simply would not be
able to do it? Heck, humans manage to keep tigers in zoos, where millions of
people see them every year, with a tiny serious injury rate, even though
tigers are aggressive and predatory by default and can trivially overpower
humans.

~~~
idlewords
I'm not arguing that it's useless to outsmart a cat. I'm disputing the
assumption that being vastly smarter means your opponent is hopelessly
outmatched and at your mercy.

If you're the first human on an island full of tigers, you're not going to end
up as the Tiger King.

~~~
aaachilless
What if you're the first human on an island full of baby tigers? I think most
AI alarmists would argue that this analogy is vastly more appropriate.

~~~
tptacek
The idea here is that the person has to kill all the baby tigers, right?
Because otherwise the end state is the same as the island full of adult
tigers.

~~~
tedunangst
Um, no. You enslave the tigers and harvest their organs to achieve
immortality.

~~~
idlewords
Found the SEAL!

------
kazagistar
They always miss a critical and subtle assumption: that intelligence scales
equal to or faster than the computational complexity of improving that
intelligence.

This is the one assumption I am most skeptical of. In my experience, each time
you make a system more clever, you also make it MUCH more complex. Maybe there
is no hard limit on intelligence, but maybe each generation of improved
intelligence actually takes longer to find the next generation, due to the
rapidly ramping difficulty of the problem.
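
A toy model of that worry (my own sketch, not from the talk or the thread): if each generation of self-improvement costs a fixed multiple of the previous one, even an enormous compute budget buys surprisingly few generations.

```python
def improvement_steps(budget, difficulty_growth=2.0):
    """Count the self-improvement steps affordable when step k costs difficulty_growth**k units."""
    cost, steps, spent = 1.0, 0, 0.0
    while spent + cost <= budget:
        spent += cost               # pay for this generation of improvement
        steps += 1
        cost *= difficulty_growth   # the next generation is harder to find
    return steps

# A million-fold compute budget affords only ~19 doublings of difficulty:
print(improvement_steps(1e6))  # -> 19
```

The 2x-per-step ramp is an arbitrary assumption; the point is just that any exponential ramp in difficulty flattens the takeoff curve.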

I think people see the exponential-looking growth of technology over human
history, and just kinda interpolate or something.

~~~
tachyonbeam
I think the issue is that once we do manage to build an AI that matches human
capabilities in every domain, it will be trivial to exceed human capabilities.
Logic gates can switch millions of times faster than neurons can pulse. The
speed of digital signal also means that artificial brains won't be size-
limited by signal latency in the same way that human brains are. We will be
able to scale them up, optimize the hardware, make them faster, give them more
memory, perfect recall.

Nick Bostrom keeps going on in his book about the singularity, and about how
once AI can improve itself it will quickly be way beyond us. I think the truth
is that the AI doesn't need to be self-improving at all to vastly exceed human
capabilities. If we can build an AI as smart as we are, then we can probably
build one a thousand times as smart, too.

~~~
TeMPOraL
There's also another thing. AI may not need to be superhuman; it may be
close-but-not-quite human and yet more effective than us, simply because we
carry a huge baggage of stuff that a mind we build won't have.

Trust me, if I were wired directly to the Internet and had some well-defined
goals, I'd be much more effective at pursuing them than any of us here -
possibly any of us here combined. Because as a human, I have to deal with
stupid shit like social considerations, random anxiety attacks, the drive to
mate, the drive of curiosity, etc. _Focus_ is a powerful force.

~~~
jmagoon
What about consciousness or intelligence implies that it would be 'pure' in
the sense that you describe? Wouldn't a fully conscious being have a great
deal of complexity that might render it equivalent to the roommate example?
Couldn't it get offended after crawling the internet and reading that a lot of
people didn't like it very much?

The idea that 'intelligence' is somehow an isolatable and trainable property
ignores all examples of intelligence that currently exist. Intelligence is
complex, multifaceted, and arises primarily as an interdependent phenomenon.

~~~
TeMPOraL
It doesn't _ignore_ those examples. The idea pretty much comes from the
definition of intelligence used in AI, which (while still messy at times) is
more precise than common usage of the word.

In particular, _intelligence_ is a powerful optimization process - it's an
agent's ability to figure out how to make the world it lives in look more like
it wants. _Values_, on the other hand, describe what the agent wants. Hence
the orthogonality thesis, which is pretty obvious from this definition (see
the sketch below). 'idlewords touches on it, but only to try and bash it in a
pretty dumb way - the argument is essentially like saying "2D space doesn't
exist, because the only piece of paper I ever saw had two dots on it, and
those dots were on top of each other".

You could argue that for _evolution_ the orthogonality thesis doesn't hold -
that maybe our intelligence is intertwined with our values. But that's because
evolution is a dynamic system (a very stupid dynamic system). Thus it doesn't
get to explore the whole phase space[0] at will, but follows a trajectory
through it. It _may_ be so that all trajectories starting from the initial
conditions on our planet end up tightly grouped around human-like intelligence
_and_ values. But not being able to randomize your way "out there" doesn't
make the phase space itself disappear, nor does it imply that it is
inaccessible _for us_ now.

--

[0] -
[https://en.wikipedia.org/wiki/Phase_space](https://en.wikipedia.org/wiki/Phase_space)

------
kobayashi
I can't disagree enough. Having recently read Superintelligence, I can say
that most of the quotes taken from Bostrom's work were disingenuously cherry-
picked to suit this author's argument. S/he did not write in good faith. To
build a straw man out of Bostrom's theses completely undercuts the purpose of
this counterpoint. If you haven't yet read Superintelligence or this article,
turn back now. Read Superintelligence, _then_ this article. It'll quickly
become clear to you how wrongheaded this article is.

~~~
mundo
People will stop downvoting this if you edit it to add a specific example or
three.

~~~
kobayashi
Too late to edit, so I'll post just a few examples here:

>The only way out of this mess is to design a moral fixed point, so that even
through thousands and thousands of cycles of self-improvement the AI's value
system remains stable, and its values are things like 'help people', 'don't
kill anybody', 'listen to what people want'.

Bostrom absolutely did not say that the only way to inhibit a cataclysmic
future for humans post-SAI was to design a "moral fixed point". In fact, many
chapters of the book are dedicated to exploring the possibilities of
ingraining desirable values in an AI, and the many pitfalls in each.

Regarding the Eliezer Yudkowsky quote, Bostrom spends several pages, IIRC, on
that quote and how difficult it would be to apply to machine language, as well
as what the quote even means. This author dismissively throws the quote in
without acknowledgement of the tremendous nuance Bostrom applies to this line
of thought. Indeed, this author does that throughout his article - regularly
portraying Bostrom as a man who claimed absolute knowledge of the future of
AI. That couldn't be further from the truth, as Bostrom opens the book with an
explicit acknowledgement that much of the book may very well turn out to be
incorrect, or based on assumptions that may never materialize.

Regarding "The Argument From My Roommate", the author seems to lack complete
and utter awareness of the differences between a machine intelligence and
human intelligence. That a superintelligent AI must have the complex
motivations of the author's roommate is preposterous. A human is driven by a
complex variety of push and pull factors, many stemming from the evolutionary
biology of humans and our predecessors. A machine intelligence need not share
any of that complexity.

Moreover, Bostrom specifically notes that while most humans may feel there is
a huge gulf between the intellectual capabilities of an idiot and a genius,
these are, in more absolute terms, minor differences. The fact that his
roommate was/is apparently a smart individual likely would not put him
anywhere near the capabilities of a superintelligent AI.

To me, this is the smoking gun. I find it completely unbelievable that anyone
who read Superintelligence could possibly assert "The Argument From My
Roommate" with a straight face, and thus, I highly doubt that the author
actually read the book which he attacks so gratuitously.

~~~
jmagoon
Well, the thing is there is no such thing as 'machine intelligence', so it's
all just an assumption on top of an assumption about a thing we don't have a
very good grasp of yet.

You're essentially saying that the author is wrong for saying the
philosopher's stone can't transmute 100 bars of iron to 100 bars of gold,
because a philosopher's stone could absolutely do that type of thing, because
that's what philosopher's stones do.

To argue the merits of this position: why must a machine intelligence 'not
share any of that complexity' of a human intelligence? What suggests that
intelligence is able to arise _absent_ complexity? Isn't the only current
example of machine intelligence we have one produced by feeding massive
amounts of complex information into a program that gradually adjusts itself to
its newly discovered outside world? Or are you suggesting that you could feed
singular types of information to something that would then be classified as
intelligent?

~~~
kobayashi
I did not say that a machine intelligence mustn't share motivational
complexity a la humans. I said that an SAI _need not_ share such complexity.
Those are two very different statements.

And to understand how/why a machine intelligence could arise without being
substantially similar to a human intelligence and sharing similar motivations,
well, I suggest you read the book or similar articles. In short, just because
humans are the most sophisticated intelligences of which we yet know, it would
be a careless and unsubstantiated leap to believe that a machine intelligence
is likely to share similar traits with humankind's intelligence.
If this is unclear to you, I recommend you learn about how computer programs
currently work, and how they're likely to improve to the point of becoming
superintelligent.

By the way, there are many types of SAI - for example, an SAI whose
superintelligent portion relates to speed, or strategy, or a few other types.

------
apsec112
"The assumption that any intelligent agent will want to recursively self-
improve, let alone conquer the galaxy, to better achieve its goals makes
unwarranted assumptions about the nature of motivation."

This isn't just an unreflective assumption. The argument is laid out in much
more detail in "The Basic AI Drives" (Omohundro 2008,
[https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)),
which is expanded on in a 2012 paper
([http://www.nickbostrom.com/superintelligentwill.pdf](http://www.nickbostrom.com/superintelligentwill.pdf)).

~~~
tlb
Certainly the assumption that _every_ intelligent agent will want to
recursively self-improve is unwarranted.

But it only takes _one_ intelligent agent that wants to self-improve for the
scary thing to happen.

~~~
trishume
Why wouldn't it if it is able to? It doesn't have to "want" to self-improve,
it only has to want anything that it could do better if it was smarter. All it
needs is the ability, the lack of an overwhelming reason not to, and a basic
architecture of optimizing towards a goal.

If you knew an asteroid would hit the earth 1 year from now, and you had the
ability to push a button and become 100,000x smarter, I would hope your values
would lead you to push the button because it gives you the best chance of
saving the world.

~~~
SilasX
Because there are tradeoffs. Whatever its goal is, some of those "drives"
(instrumental values) will be more effective than others for reaching that
goal over the timespan that it cares about.

Sometimes "accumulating more resources" is the most effective way. Sometimes
"better understanding what problem I'm trying to solve" is the most effective
way". Sometimes "resisting attempts to subvert my mind" is the most effective
way. And yes, sometimes "becoming better at general problem solving" (self
improvement of one's intelligence) is the most effective way.

But there's no guarantee that any one of those will be a relevant bottleneck
in any particular domain, so there's no guarantee an agent will pursue all of
those drives.

~~~
trishume
Agreed. But if the goal is something like "build the largest number of
paperclips", recursive self-improvement is going to be a phenomenally good way
to achieve that, unless the AI is already intelligent enough to tile the
universe with paperclips. Either way, we don't care whether it self-improved
or not; that's just the seemingly most likely path. We only care whether it is
overwhelmingly more powerful than us.

The only thing that stops me from recursively self-improving is that I'm not
able to. If I could, it would be a fantastic way to do the good things that,
as an altruistic human, I want to do: averting crises (climate change, nuclear
war), minimizing poverty and misery, etc.

~~~
Applejinx
'build the largest number of paperclips' is a nihilistic, unintelligent goal.

~~~
Retra
So is "die so I can be with my god", but plenty of people believe that is a
morally correct course of action.

------
Animats
Nearer term risks:

\- AI as management. Already, there is at least one hedge fund with an AI on
the board, with a vote on investments.[1] At the bottom end, there are systems
which act as low-level managers and order people around. That's how Uber
works. A fundamental problem with management is that communication is slow and
managers are bandwidth-limited. Computers don't have that problem. Even a
mediocre AI as a manager might win on speed and coordination. How long until
an AI-run company dominates an industry?

\- Related to this is "machines should think, people should work." Watch this
video of an Amazon fulfillment center.[2] All the thinking is done by
computers. The humans are just hands.

[1] [http://www.businessinsider.com/vital-named-to-board-2014-5](http://www.businessinsider.com/vital-named-to-board-2014-5)
[2] [https://vimeo.com/113374910](https://vimeo.com/113374910)

~~~
visarga
> The humans are just hands.

Not for long. Robots will be cheaper soon.

> All the thinking is done by computers.

It's hard for humans to operate on more than 7 objects at the same time - a
limitation of working memory. So naturally there are simple management and
planning tasks that benefit from computers' ability to track more objects.

------
beloch
It's one thing to worry about AIs taking over the world _someday_. It's quite
another matter entirely to think about current military automation of WMD
deployment.

Everyone's probably seen Dr. Strangelove at some point in time. If you
_haven't_, stop reading _immediately_ and go watch it. You will not regret
this. Those who have watched it are familiar with a contrived, hilarious, but
mostly plausible scheme by which human beings could be fooled into launching
an unauthorized nuclear first strike. This is with technology from half a
century ago. As you watch this movie, you will be exposed to a system with
checks and safeties that can be bypassed by a determined (and insane)
individual. Many humans at every step of the process could have stopped the
deployment, but chose to blindly follow orders, well, like _machines_.

What people _should_ be worried about today is how many humans stand between a
decision made by a nuclear power's leader and launch. Humans doubt. Humans
blink. Humans flinch. When all the data says nuclear missiles are inbound and
it's time to retaliate, humans can still say "No", and have [1]. If you
automate humans out of the system, you wind up reducing the running length of
a Dr. Strangelove remake. I suspect it would be down to under five minutes
today.

Thanks to popular media, we have this strange idea that taking humans out of
the equation in automated weapon systems reduces the possibility for error.
Individual humans can, and do, make mistakes. This is true. However, humans
fix each other's mistakes in any collaborative process. Machines, on the other
hand, only amplify the mistakes of the original user. If a bad leader makes a
bad decision with a highly automated nuclear arsenal at his or her disposal,
how many other humans will have the chance to scrutinize that decision before
machines enact it?

[1][https://en.wikipedia.org/wiki/Stanislav_Petrov](https://en.wikipedia.org/wiki/Stanislav_Petrov)

~~~
erikpukinskis
Nuclear strike isn't particularly bad. Hiroshima + Nagasaki was 200k
casualties or so. That many people are killed _every year_ by sugar. Traffic
kills over a million.

Any kind of worrying about theoretical future death is silly, especially
accidental death like the kind you are asking us to devote resources to. Why
should I care about theoretical future death when there is real, actual death
happening all around me?

Heck, if you want to worry about future death, why not worry about climate
change deaths, which we can actually model? At least that gives us some chance
to do something about it.

Of course I'm assuming you actually care about public health. Maybe you are
just worried about your own health, and nuclear strike is relevant to you
because it has the potential to pierce through the barriers that distance you
from people who are actually facing mortal peril.

~~~
peterwoo
You should care about "theoretical" future death because, via the inexorable
passage of time, the future becomes the present. At which time you have
"actual" death happening all around you.

Let's try and stop catastrophic climate change, diabetes, and
traffic/pollution related deaths, and also allow some people to worry about
nuclear holocaust as well.

------
jimmcslim
'In particular, there's no physical law that puts a cap on intelligence at the
level of human beings.'

Maybe not, but there are definitely very physical laws governing everything
else, that a superintelligent being's ambitions would run into.

A superintelligent being isn't going to be able to build a superluminal
reactionless drive if the laws of the universe say it isn't possible.

More relevantly, a superintelligent being isn't going to be able to enslave us
all with robots if the laws of chemistry don't permit a quantum leap in
battery chemistry.

~~~
idlewords
The place where AI alarmists seem to forget this most often is in
computational complexity, and particularly in the power of super-hyper-
computers to simulate reality. Bostrom in particular doesn't seem to
appreciate the constraints complexity theory puts on what an AI could
calculate (unless he thinks P=NP).

~~~
eemax
Here's a response about why complexity theory may not constrain AI that much:
[https://www.gwern.net/Complexity%20vs%20AI](https://www.gwern.net/Complexity%20vs%20AI)

It is something "AI alarmists" think about.

~~~
mrfusion
What's his reason? I got halfway through that and realized I didn't
understand.

~~~
paulbaumgart
That even if you see diminishing returns to what you can accomplish with
greater intelligence, a small advantage will still compound into a large one
over the theoretically infinite lifespan of an AI.
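
A quick numerical gloss on that compounding claim (my own sketch, not gwern's code):

```python
# Even a 1% capability edge per improvement cycle diverges given enough cycles.
edge = 0.01
ratio = 1.0
for cycle in range(1000):
    ratio *= 1 + edge
print(round(ratio))  # -> 20959: a ~21,000x advantage after 1000 cycles
```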

~~~
swolchok
> the theoretically infinite lifespan of an AI

Infinite? Why isn't AI subject to the heat death of the universe?

~~~
paulbaumgart
It just might be smart enough to figure out a way around that.

~~~
fjdlwlv
Naturally, Asimov already wrote about this: _The Last Question_ and _The Last
Answer_.

Really, Asimov is more on point and more fun to read than all these
Superintelligence pundits.

~~~
erikpukinskis
Ooh, thanks for this. I've read _The Last Question_ at least once before, but
am really enjoying reading it again. I really liked this bit, emphasis mine,
as it relates to our current spectating of a dying ecosystem:

"Unhappily, Zee Prime began collecting interstellar hydrogen out of which to
build a small star of his own. _If the stars must someday die, at least some
could yet be built._ "

------
Afforess
> _Observe that in these scenarios the AIs are evil by default, just like a
> plant on an alien planet would probably be poisonous by default._

I believe this is a core misunderstanding. Bostrom never says that a
superintelligent AI is evil by default. Bostrom argues that the AI's goals
will be orthogonal to ours, underspecified in such a way that leads it to
destroy humanity. The paperclip-optimizer AI doesn't want to kill people; it
just doesn't notice them, the same way you don't notice the ants you drive
over on your daily commute. AIs with goals orthogonal to our own will attack
humanity the same way humanity attacks the rainforests: piecemeal, as needed,
and without remorse or care for what was there before. It won't be evil; it
will be uncaring, and blind.

~~~
monk_e_boy
An uncaring blind AI wouldn't be very interesting; I would assume whoever had
turned it on and was observing it would just turn it off and tweak the
algorithm.

But I don't think this is even possible, because when the AI was in its child
stage and was learning, it would learn from people, so it would become like
people. Or at least understand them. At some point it has to know less than a
human and will learn at a rate that humans can measure. As we measure, we can
make decisions about it.

I don't agree with the assumption that a factory making paperclips will make
the transition to a super intelligent AI in a short time frame. I think it
will take years. And along that route (of years of learning) we'll have time
to talk to it and decide if it should be kept switched on.

~~~
lmm
> I don't agree with the assumption that a factory making paperclips will make
> the transition to a super intelligent AI in a short time frame.

Why? Because factories run by humans take a long time to transition? It might
take weeks or months to make a process tweak in a human-run factory, but
deploying new code happens in seconds or faster.

~~~
monk_e_boy
Yes, because the deployment of the code is the hard stage. _rolls eyes_

How long would it take to write? Scientists are going through this stage
right now and it's taking about 50 years so far. You think a paperclip factory
is going to be quicker? Nope. And any smart AI that is being used to design a
better AI will take years and will be front-page news (assuming it's
non-military), just like any other tech company (funding, profit, getting good
hires).

This assumption that the factory gets smart in secret and buys more servers in
secret and creates code somewhere in secret is just so wrong. It will be dog-
or dolphin-smart at some point and will ask for things; it'll communicate with
us.

~~~
lmm
> How long would it take to write? Scientists are going through this stage
> right now and it's taking about 50 years so far. You think a paperclip
> factory is going to be quicker? Nope. And any smart AI that is being used to
> design a better AI will take years and will be front-page news (assuming
> it's non-military), just like any other tech company (funding, profit,
> getting good hires).

It may take years or decades, but the point is that once it reaches the point
that it can make itself smarter, it'd go from "barely smarter than a human" to
"inconceivably smart" in a matter of milliseconds.

> This assumption that the factory gets smart in secret and buys more servers
> in secret and creates code somewhere in secret is just so wrong. It will be
> dog- or dolphin-smart at some point and will ask for things; it'll
> communicate with us.

Dogs and dolphins don't communicate with us very effectively. I think it's
very plausible that the AI could get to beyond-human-level smart without ever
spontaneously deciding to talk to us. Are you sure there's a region where it's
smart enough to talk but not smart enough to hide how smart it is?

~~~
monk_e_boy
> it'd go from "barely smarter than a human" to "inconceivably smart" in a
> matter of milliseconds

Incorrect.

It will be interesting to come back to this thread in 50 years time and see
who is right.

------
md224
"Put another way, this is the premise that the mind arises out of ordinary
physics... If you are very religious, you might believe that a brain is not
possible without a soul. But for most of us, this is an easy premise to
accept."

The thing that irks me about this is how it reinforces a common (and in my
opinion, false) dichotomy: either you believe the mind is explicable in terms
of ordinary physics or you believe in a soul and are therefore religious. I
feel like there should be a third way, one that admits something vital is
missing from the physicalist picture but doesn't make up a story about what
that thing is. There is a huge question mark at the heart of neuroscience --
the famed Explanatory Gap -- and I think we should be able to recognize that
question mark without being labeled a Supernaturalist. Consciousness is weird!

~~~
JamilD
The roundworm (C. elegans) only has 302 neurons and about 7,000 synapses, but
is capable of social behavior, movement, and reproduction. The entire
connectome has been mapped, and we understand how many of these behaviors work
without having to resort to additional ontological entities like your "third
way".

If this complex behavior can be explained using only 302 neurons, I have no
doubt that the complexity of human behavior and consciousness can be explained
using 100,000,000,000 neurons.

~~~
lucker
Behavior? Yes. Consciousness? Maybe not. As far as I know, no-one has ever
come up with an explanation of how matter can give rise to consciousness.
Obviously consciousness is affected by changes in matter — if I take certain
drugs, I feel different, etc. — but there is no explanation at all for what
mechanisms may give rise to consciousness in the first place, and the very
idea of the material giving rise to consciousness might actually not make
sense.

~~~
snowwrestler
This line of thinking begs the question of whether consciousness is some
special thing in nature. You only have to look for an explanation if you think
there is something to explain.

If I start from the assumption that I am mistaken about what I think
consciousness is--that maybe it doesn't exist at all the way I think it does--
then I don't have to worry about how matter gives rise to it. I can focus
instead on trying to understand where my definition went wrong.

Humans have never lacked for opinions or explanations about what natural
things are, or how they got that way. But in the practice of science, these
must yield before empirical evidence.

As you note, there is a huge amount of evidence that consciousness is
physical. But _there is no objective evidence that I experience consciousness
as you define it._ You just have to take my word for it--and so do I. But
maybe I'm wrong.

------
pkinsky
Not that I take the whole Bostrom superintelligence argument too seriously,
but this is an incredibly weak argument (or more accurately, a bundle of
barely-related arguments thrown at a wall in the hope that some stick) against
it. Feel free to skip the long digression about how nerds who think technology
can make meaningful changes in a relatively short amount of time are
presumptuous megalomaniacs whose ideas can safely be dismissed without
consideration; it's nothing that hasn't been said before.

------
rl3
The notion that near-term AI concerns and existential AI concerns somehow
represent a binary option that we must choose between is fallacious at best.

Near-term AI concerns represent a massive challenge encompassing many ethical
and social issues. They must be addressed.

Existential AI concerns, while low probability, have consequences so dire that
they warrant further research regardless. These too must be addressed.

There is ample funding, and there are enough skilled people, to work on both
problems effectively. Why fight about it?

~~~
timelincoln
I think it's important to regulate the potential runaway effect of these
ideologies that satisfy the religious instincts of groups.

------
wyager
> What kind of person does sincerely believing this stuff turn you into? The
> answer is not pretty.

This is a particularly stupid version of
[https://en.wikipedia.org/wiki/Appeal_to_consequences](https://en.wikipedia.org/wiki/Appeal_to_consequences)

"If you don't agree with me, you'll be associated with these people I'm
lambasting!" I was surprised to see something so easily refutable used to
conclude the argument; the article started out fairly strong.

> If you're persuaded by AI risk, you have to adopt an entire basket of
> deplorable beliefs that go with it.

Well if they're "deplorable", they must be false! QED.

------
grandalf
What about the counter-argument from domestic canines:

More likely, artificial intelligence would evolve in much the same way that
domestic canines have evolved -- they learn to sense human emotion and to be
generally helpful, but the value of a dog goes down drastically if it acts in
a remotely antisocial way toward humans, even if doing so was attributable to
the whims of some highly intelligent homunculus.

We've in effect selected for certain empathic traits and not general purpose
problem solving.

Pets are not so much symbiotic as they are parasitic, exploiting the human
need to nurture things, and hijacking nurture units from baby humans to the
point where some humans are content enough with a pet that they do not
reproduce.

I could see future AIs acting this way. Perhaps you text it and it replies
with the right combination of flirtation and empathy to make you avoid going
out to socialize with real humans. Perhaps it massages your muscles so well
that human touch feels unnecessary or even foreign.

Those are the vectors for rapid AI reproduction... they exploit our emotional
systems and only require the ability to anticipate our lower-order cognitive
functioning.

If anything, an AI would need to mimic intellectual parity with a human in
order to create empathy. It would not feel good to consult an AI about a
problem and have it scoff at the crudeness of your approach to a solution.

Even if we tasked an AI with assisting us with life-optimization strategies,
how will the AI know what level of ambition is appropriate? Is a promotion
good news? Or should it have been a double promotion? Was the conversation
with friends a waste of time? Suddenly the AI starts to seem like little more
than Eliza, creating and reinforcing circular paths of reasoning that mean
little.

But think of the undeniable joy that a dog expresses when it has missed us and
we arrive home... the softness of its fur and the genuineness of its pleasure
in our company. That is what humans want and so I think the future Siri will
likely make me feel pleased when I first pick up my phone in the morning in
the same way. She'll be there cheering me on and making me feel needed and
full of love.

~~~
nojvek
As a dog owner I really agree with this. I am also building a little Raspberry
Pi robot capable of being a companion for my dog when I'm at work.

The idea is that if I can have a moving agent entertain my dog and keep him
occupied, we just need to extrapolate and imagine robot companions.

Securing love, sex, and companionship is hard. With the advent of Tinder and
the like, women can have thousands of options at their fingertips while men
have to settle for what they can find. The insane traffic on porn sites
suggests that artificially satisfying this need is something we will easily
accept. Imagine Westworld-like robots created for human pleasure.

We have these innate needs and desires - something that ad companies probe and
thrive upon.

Google and Facebook are becoming hoarders of colossal amounts of data,
capital, and AI talent in the hope of shoving more ads at us, satisfying their
ever-growing share prices, and doing more with fewer humans. It's a very scary
scenario already. Because it's free, all our data belongs to them, to
endlessly learn about our behaviours and shove ad bait in our faces.

------
lern_too_spel
Pretty poorly argued. The AI alarmists simply argue that if the super-
intelligence's objective isn't defined correctly, the super-intelligence will
wipe us out as a mere consequence of pursuing its objective, not that the
super-intelligence will try to conquer us in a specific way like Einstein
putting his cat in a cage. The alarmists' argument is analogous to humans
wiping out ecosystems and species by merely doing what humans do and not by
consciously trying to achieve that destruction. Many of the author's arguments
stem from this fundamental mistake.

------
narrator
"The second premise is that the brain is an ordinary configuration of matter,
albeit an extraordinarily complicated one. If we knew enough, and had the
technology, we could exactly copy its structure and emulate its behavior with
electronic components, just like we can simulate very basic neural anatomy
today."

We could have a computer program that perfectly simulates the brain, but has
some nasty O(2^N) complexity algorithm parts that are carried out in constant
time by physical processes such as protein folding. Thus, in theory we could
simulate a brain inside of a computer but the program would never get
anywhere, even assuming Moore's law would continue indefinitely.
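
A back-of-envelope sketch of that obstacle (my own illustration, granting the O(2^N) assumption above): each Moore's-law doubling of compute buys exactly one more unit of N.

```python
def moores_law_years_for(extra_n, doubling_period_years=2.0):
    """Years of steady compute doubling needed to afford 2**extra_n more work."""
    return extra_n * doubling_period_years

# Simulating just 100 more exponentially-costly components:
print(moores_law_years_for(100))  # -> 200.0 years
```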

~~~
AnimalMuppet
I don't buy the AI quasi-religious stuff. But your argument here is flawed. If
protein folding can do the process in constant time, we may be able to find
another process (but electronic rather than wet chem) that can also do it in
constant time.

~~~
narrator
Being able to find constant time algorithms for algorithms that currently take
exponential time is not at all assured.

~~~
trishume
It is, to some extent, if we have a constant-time example in real life. If the
AI can't solve protein folding fast enough, it can just design absurdly fast
protein sequencers and really good microscopes, get proteins to fold
themselves in real life, and use the results in the rest of the computation.

------
mrfusion
I've always wondered if the problems an intelligence solves are exponentially
hard, so that even if we build a superintelligence it wouldn't be all that
much smarter than we are.

For example, compare how many more cities in the traveling salesman problem a
supercomputer can solve versus your grandma's PC. It's more, but surprisingly
not all that many more.
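
A rough sketch of that point (my own, assuming naive brute-force O(n!) tour enumeration and a one-hour budget; the machine speeds are illustrative):

```python
import math

def max_cities(ops_per_second, seconds=3600):
    """Largest n whose n! tour enumerations fit in the compute budget."""
    n = 1
    while math.factorial(n + 1) <= ops_per_second * seconds:
        n += 1
    return n

print(max_cities(1e9))   # grandma's PC (~10^9 ops/s): 15 cities
print(max_cities(1e18))  # exascale supercomputer:     22 cities
```

A billion-fold increase in compute buys only about seven more cities.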

What do you think of that idea?

~~~
idlewords
I think this basic concept of intractability, which programmers are very
familiar with, hasn't penetrated far enough into the AI world.

Bostrom and Yudkowsky in particular seem happy to hand-wave past computational
complexity.

~~~
Eliezer
I'm getting kind of sick of people who haven't read 2% of the stuff imagining
what they think we've never talked about over the last 16 years.

[https://intelligence.org/files/IEM.pdf](https://intelligence.org/files/IEM.pdf)

~~~
idlewords
Reading 2% of your stuff is like reading 400% of my stuff, and I'm verbose as
hell.


~~~
ahh
First off, that's a blatant ad hominem.

Moreover, if your best argument against the guy you're claiming is defrauding
everyone is "I can't be bothered to read his work"...

~~~
idlewords
I've read Eliezer's stuff, which is what gives my request its special
poignancy.

------
ikeboy
If Elon Musk, Bill Gates, and Stephen Hawking etc all expressed belief in UFOs
being from aliens, and you didn't know a lot about UFOs and affiliated cults,
what would be the smart thing to believe?

Also,

>AI alarmists believe in something called the Orthogonality Thesis. This says
that even very complex beings can have simple motivations, like the paper-clip
maximizer.

Uh, no. The point of the paper clip maximizer is that it's orthogonal, not
that it's simple.

>It's very likely that the scary "paper clip maximizer" would spend all of its
time writing poems about paper clips, or getting into flame wars on
reddit/r/paperclip, rather than trying to destroy the universe.

You know what can be made into poems about paper clips? Humans. You know what
can have better flame wars than humans? Our atoms, rearranged into the ideal
paper clip flame war warrior.

>The assumption that any intelligent agent will want to recursively self-
improve

That's not really a premise. A better version would be "a likely path to
super-intelligence will be a self-improving agent".

>It's like if those Alamogordo scientists had decided to completely focus on
whether they were going to blow up the atmosphere, and forgot that they were
also making nuclear weapons, and had to figure out how to cope with that.

Yudkowsky has argued that more should be invested in research into AI risk.
There are tens of billions of dollars being spent on AI R&D, and somewhere in
the tens of millions range spent on AI risk research. Even if advocates wanted
us to spend hundreds of millions of dollars on risk research a year, that
wouldn't make this criticism fair. You have a point that we shouldn't be
ignoring other more important things for this, but to argue against increasing
spending from 8 figures to 9 figures you need better arguments.

~~~
daveguy
> If Elon Musk, Bill Gates, and Stephen Hawking etc all expressed belief in
> UFOs being from aliens, and you didn't know a lot about UFOs and affiliated
> cults, what would be the smart thing to believe?

That Elon Musk, Bill Gates, and Stephen Hawking etc. are all a little nutty
when it comes to UFOs and aliens?

I don't know a lot about UFOs and affiliated cults, but I'm going to guess
that those mentioned are not UFO experts just like they aren't machine
learning experts.

> That's not really a premise. A better version would be "a likely path to
> super-intelligence will be a self-improving agent".

Excellent point, but that doesn't give a self-improving agent the ability to
ignore computational complexity or the uncertainty of chaotic systems.

~~~
bambax
> _That Elon Musk, Bill Gates, and Stephen Hawking etc. are all a little nutty
> when it comes to UFOs and aliens?_

Yes! Elon Musk believes firmly that we're living in a simulation, and that
doesn't make me believe in that theory more, it simply makes me admire Musk
less.

Just because someone is or has been extremely successful doesn't mean they're
right about everything. Many successful and intelligent people have been very
religious: that's a testament to the human mind's complexity and frailty, not
to the existence of God...

Madeleine Albright is a strong advocate of Herbalife: that doesn't change my
opinion of Herbalife but it does change my opinion of Albright.

~~~
ikeboy
That's one. If as many people as have spoken out about AI risk also firmly
believed we're in a simulation, it would shift my views (I would expect there
to be some evidence or very strong arguments for it).

------
jzwinck
> an artificial intelligence would also initially not be embodied, it would be
> sitting on a server somewhere, lacking agency in the world. It would have to
> talk to people to get what it wants.

Organized crime and semi-organized criminal gangs stand to establish a highly
effective symbiosis with amoral machines which "lack agency."

If a machine wants to kill someone, all it needs to do is find a person to
carry out the task in exchange for some benefit such as cash or blackmail ("I
will report you to the police for helping me steal the electricity I need,
plus the small trafficking operation I helped you optimize last summer").

Two arms and two legs are not what make modern criminals scary. It is their
ability to plan, optimize and repeat sophisticated operations. Soon there will
be an app for that.

~~~
fjdlwlv
But why would a machine ever want to kill someone? Only if someone turned the
machine on for some reason - and that person has agency. And you don't need
superintelligence for a drone to bomb the wrong person, or for a computer
virus to destroy the internet, or a biochemical virus, or any other
self-replicating disaster.

------
blakeweb
For those interested in another very different analysis of the evidence for
and against superintelligence being a worthwhile concern, here's Holden
Karnofsky, executive director of GiveWell and the related Open Philanthropy
Project:

\- Up until 2016, Holden held the opinion that it was not a worthwhile
concern. He describes why he changed his mind here:
[http://www.openphilanthropy.org/blog/three-key-issues-ive-ch...](http://www.openphilanthropy.org/blog/three-key-issues-ive-changed-my-mind-about#Changing_my_mind_about_potential_risks_from_advanced_artificial_intelligence)

\- And he outlines his current arguments in favor of treating it as a
significant concern here:
[http://www.openphilanthropy.org/blog/potential-risks-advance...](http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity)

In that lengthy write-up, he addresses both the shorter-term risks from things
like misuse of non-superintelligent AI, along with longer-term risks from
superintelligent AI.

~~~
mcguire
What does he suggest doing about it?

That's always my question for this sort of topic. It seems to me that the only
reasonable behavior for someone accepting SAI as a legitimate concern is to
advocate defunding all AI researchers, as well as criminal punishments for AI
research.

~~~
blakeweb
There's a section on Tractability--he can speak to his opinions better than I
can: [http://www.openphilanthropy.org/blog/potential-risks-
advance...](http://www.openphilanthropy.org/blog/potential-risks-advanced-
artificial-intelligence-philanthropic-opportunity#Tractability)

TL;DR: He talks about several avenues of technical and strategy research that
seem plausibly very useful and are not currently being pursued by more than a
handful of people in the world. Many of these currently-engaged people are
precisely the folks the author of this post disparages for being weird or
insular.

One of the avenues of technical research he mentions is "transparency,
understandability, and robustness against or at least detection of large
changes in input distribution" for AI/ML systems. In other words, technical
research to produce methods capable of reducing the likelihood of advanced
systems behaving in severely bad, unexpected ways.

------
brw12
So many people don't realize that as social creatures, we see the world
through the lens of our social brains. Computers don't care about "all
powerful" or "savant" or "should". They don't qualitatively distinguish
between killing one person and every person. Self-replication plus
proliferation of cheap components plus proliferation of AI algorithms equals a
time when a script kiddie or a bug can mean every last fragile sack of meat
and water gets punctured or irradiated or whatever. If it can be done in a
video game with good physics simulation, it can be done eventually in real
life. It won't be like a movie where the ticking time bomb works on a human
timescale and always has a humanlike weakness. Comparing this to nuclear
weapons is silly. It's more like issuing every person in the world a "kill or
help between 0 and 7 billion people" button that's glitchy and spinning up
1,000 4chan chains with advice on tinkering with it.

------
amasad
It's amusing how his talk and Stuart Russell's talk [0] -- which end up going
opposing ways -- both use the "atomic bomb" as an example of fallacious thinking.
Stuart's example is about how respected physicists said "impossible to get
energy out of atoms" and the very next morning someone published a paper
stating how to do it.

[0]:
[https://www.youtube.com/watch?v=zBCOMm_ytwM](https://www.youtube.com/watch?v=zBCOMm_ytwM)

~~~
idlewords
He and I were on a panel once! It did not go well.

~~~
tptacek
And you're holding out on us with the story because...

~~~
idlewords
Upgrade to HN Premium!

------
Practicality
"Hopefully you'll leave this talk a little dumber than you started it, and be
more immune to the seductions of AI that seem to bedevil smarter people. "

Nice.

All in all, effectively reasoned. I've been making similar arguments for the
last few years. AI is likely to create a lot of problems, and solve a lot of
problems. But I think both aspects are messy and our relationship with our
future technology will be complicated and fraught with regular human issues.

Some of those potential issues are very serious, yes, but serious like
automating jobs and not solving the employment issue, or creating a very
effective army of automated drones and single-handedly taking over a country
(or, sure, the world), not issues of AI destroying the planet and/or enslaving
all of mankind.

------
wnoise
> the early 20th century attempt to formalize mathematics and put it on a
> strict logical foundation. That this program ended in disaster for
> mathematical logic is never mentioned.

Is he joking? Yes, it "failed", but in doing so it created a wonderful
revolution in mathematical thought, opening up a rich area encompassing model
theory, types, computability, algorithms, efficiency, and more.

------
otakucode
He is completely skipping the most important part. The superintelligence has
to have some _reason_ to be in conflict with us. Human beings don't go out of
their way to hunt down and eliminate ants. They don't find out what ants eat
and seize control of it to manipulate them. There is no reason to think that a
superintelligent machine would be likely to run the kind of terrible
interference with us that is proposed.

So it's super smart and has its own goals. We can reliably presume that it
will need energy to achieve those goals. Will it need to achieve them quickly?
Why? Would the superintelligence be shortsighted enough to provoke humans into
active combat against it? I see no reason to just assume we know the thing
would have human eradication as a goal, need exorbitant amounts of energy and
resources because it sees achieving its goals as a terribly time-critical
thing to do, etc.

Also, if we're going to build a brain and assume no quantum weirdness - why
assume the total absence of a subjective morality? Why assume complete
immunity to social influence, something nearly all the brains we observe are
subject to?

And let's not forget - human beings are in a weird spot, intelligence-wise.
We're smart enough to achieve things, but we're not smart enough to be
crippled by the profound lack of control we have over things. We're totally
comfortable going out and driving around all day, even though we claim to
value human life extremely highly and even though we know that causing deaths
of innocent human beings is not THAT far-fetched of an outcome of driving a
car. I would be unsurprised if we flipped on a super-AI and after 5 minutes it
simply stopped in its tracks, having determined that any action it might take
carries a non-zero probability of resulting in its own destruction, and that
it should instead take the safe route and not act at all. No matter how
superintelligent it is, it is not going to be able to magically compensate for
the influences of chaos theory, which destroy the ability to be certain about
ANY prediction. We, as humans, feel very clever with our assumed spherical
cows, frictionless surfaces, homogeneous air pressure and zero wind
resistance. Why would a superintelligence be comfortable with that?

~~~
xendo
>So it's super smart and has its own goals. We can reliably presume that it
will need energy to achieve those goals. Will it need to achieve them quickly?
Why? Would the superintelligence be shortsighted enough to provoke humans into
active combat against it?

It happened so many times throughout human history that it's really foolish
to reject. People with their own goals burned, raped, murdered. Maybe a
superintelligence will build its own civilization, and put people in
reservations to suppress remorse.

~~~
TheRealDunkirk
You'd think a "super" intelligence would be smart enough to minimize the waste
of resources to fight anyone or anything. You'd think it would find the most
harmonious path to dominance, as it would be the most efficient.

If you wanted to quell a rabble of contrary people, you could, say, control
and direct all the education available to the population, make sure that
reality-escaping drugs were easily obtainable, provide loads of free
entertainment that ran 24x7, make sure that prices of luxury goods (like game
consoles or fashionable clothes or whatever) were affordable for the majority
of the population, allow people to feel important by giving them access to
make comments on various sites on the internet, manipulate all the world's
news with the feedback of what was happening on social media...

Wait, we were talking about some mythical future, right?

------
roscoebeezie
I don't think an AI has to kill people to be super dangerous. What if a social
AI was used by some group to sway/manipulate public opinion on some important
issue? How much manipulation would it take to be dangerous?

------
stcredzero
_In 1945, as American physicists were preparing to test the atomic bomb, it
occurred to someone to ask if such a test could set the atmosphere on
fire...Los Alamos physicists performed the analysis and decided there was a
satisfactory margin of safety._

Exactly how big was this margin? Please someone tell me this was quite a large
margin!

~~~
idlewords
If I remember right, something like a safety factor of 30. So not really as
big as people would like!

More detail in this great summary:
[http://large.stanford.edu/courses/2015/ph241/chung1/](http://large.stanford.edu/courses/2015/ph241/chung1/)

And the Los Alamos paper is really good reading:

[http://www.sciencemadness.org/lanl1_a/lib-www/la-
pubs/003290...](http://www.sciencemadness.org/lanl1_a/lib-www/la-
pubs/00329010.pdf)

~~~
marcosdumay
The summary talks about a safety factor of 1000 in an extremely unreasonable
scenario of 1MeV average temperature, falling to 10 in an even more
unreasonable scenario of 10MeV temperature.

I don't know the actual probabilities, but I do really expect a few atoms to
get over 1MeV inside a fission bomb (as the reaction emits ~5MeV of energy).
But they will almost certainly not hit each other. I also don't see how any
atom anywhere can reach 10MeV.
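
(For scale, a "temperature" of 1MeV means kT = 1MeV. Converting with
Boltzmann's constant:

    T = \frac{E}{k_B} = \frac{10^{6}\,\mathrm{eV}}{8.617 \times 10^{-5}\,\mathrm{eV/K}} \approx 1.2 \times 10^{10}\,\mathrm{K}

That is roughly a thousand times hotter than the center of the sun, which is
one way to see how unreasonable the scenario is.)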

~~~
idlewords
I'm trying to find where I remember the factor of 30 figure from. Maybe it was
the equivalent calculation for hydrogen fusion in the ocean? I'll keep
looking.

~~~
marcosdumay
It's well within the 1000 - 10 range. May be an overall approximation
somewhere on the paper.

But I was just summarizing it. Looks like the kind of calculation one does
when he's entirely convinced it's impossible, but needs to verify anyway. I
liked reading it, and I was surprised they didn't take meteors into account.

------
teekert
_Observe that in these scenarios the AIs are evil by default, just like a
plant on an alien planet would probably be poisonous by default. Without
careful tuning, there 's no reason that an AI's motivations or values would
resemble ours.

For an artificial mind to have anything resembling a human value system, the
argument goes, we have to bake those beliefs into the design._

Because we are so nice to the less intelligent creatures of our world? We
don't even understand our own consciousness; by that logic, surely our
suffering is not very real and we can be used for meat.

------
tim333
>Hopefully you'll leave this talk a little dumber than you started it, and be
more immune to the seductions of AI that seem to bedevil smarter people.

I rather like the smarter people stuff - it's kind of exciting to figure out
how AI could solve human problems, and things like the OpenAI initiative seem
a sensible precaution against things going wrong. The arguments against seem a
little reminiscent of arguments against global warming that say: what do all
these 'experts' know? I can look out the window and it's snowing as much as
ever.

~~~
nojvek
It's a commendable initiative but the real value of AI is in its data. I think
about data as memories. The algorithm learns from its memories and creates a
map of abstract concepts and behaviors which it uses to make plans to maximize
some goal.

Currently in the name of free services the big tech Co's collect petabytes of
data about us and learn from it. The data is owned by them.

------
k8t
Let's take a step back and look at the meta-level.

So there's maybe a chance AI is extremely dangerous (as in wipes humanity
out), and there's a chance that AI might not be dangerous. There are arguments
about how likely each outcome is. But the more important fact is that since
there's so much at stake (all of humanity), we should likely be really really
really sure about things.

For example, let's say you live in a house and you've got your kids and your
grandkids and all your lovely pets living there. Let's say there's this
mystery device where, when you press it, there's a small chance that something
extremely bad happens. You'd want to make sure it's OK first, right? Right?!

I agree that we probably don't need to devote ALL of our resources to making
sure it's OK, as we can devote some resources to problems that are actually
hurting us right now, like diseases, etc. But there are a lot of people in the
AI community who think that it may be dangerous. It is rational that we'd want
to be absolutely sure that it's safe once AI happens.

 _We should take the approach of being cautious by DEFAULT when talking about
any technological breakthrough that brings changes we cannot reverse._

~~~
idlewords
The problem with this precautionary reasoning is it leads you to Pascal's
Mugging, where you are ready to believe very unlikely things because of the
enormous impact they'll have if true.

~~~
k8t
Yes, I agree with the reasoning behind Pascal's Mugging. But Pascal's Mugging
refers to things that have an astronomically low chance of happening — like
0.000000001. But is the chance that AI is dangerous that low? Nobody in the
world knows for sure at this point, due to how far away superAI might be, and
due to the uncertainties in implementation. Therefore, if we use Bayesian
thinking and spread it out, I'm not really sure you could put it below 1% (I
pulled this number out of thin air, but everybody at this point is doing the
same).

~~~
Analemma_
There's no magic probability value at which the Pascal's Mugging argument
suddenly "kicks in". It's all about the utilitarian tradeoff of, "given this
low probability, am I devoting the right amount of resources to preventing
this terrible event? Could they be better spent elsewhere? And is the fretting
harmful _in and of itself_ , wiping out the expected gain?" The talk is
arguing that the answer is yes to both of those latter questions.
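
To make that concrete, here's a toy expected-value calculation in Python; a
sketch of my own, with every number invented purely for illustration:

    # Naive expected-value reasoning about a low-probability catastrophe.
    # All numbers are hypothetical.
    lives_at_stake = 7.5e9  # everyone on Earth

    for p in (1e-2, 1e-6, 1e-12):
        print(f"p = {p:g}: expected deaths = {p * lives_at_stake:,.0f}")

    # Even at p = 1e-6 the "expected" loss is 7,500 lives, so this style of
    # reasoning recommends funding the cause at almost any probability --
    # that is the mugging.

Which is why the argument has to turn on resource tradeoffs, not on a magic
probability cutoff.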

------
danm07
I don't believe we should be as dismissive of AI as he suggests. There is an
undertone of populism and a string of weak rebuttals that undermine his
argument.

Here is why AI is a reason for concern:

1\. The world is made up of patterns. Proof: mathematics.

2\. Patterns can be distilled down to data.

3\. Machine Learning is highly effective at analysing patterns.

4\. Machine Learning construction itself is a pattern, albeit a very
complicated one, by human scales.

5\. Learning speed is bottlenecked by hardware and data availability, both of
which are being improved at exponential rates.

The other knock is that consciousness and its constituent pieces are a product
of evolution, which itself is not outside the realm of physics (i.e., we're
sacks of chemicals). This means it is theoretically possible to reproduce
intelligence if the same initial conditions are met.

As silicon switches over much shorter time gaps than biological substrates,
and as equivalent sensory data are fed into simulated environments, where the
vectors intersect is quite clear.

I think the key is whether we believe machine learning will reach a critical
threshold where it will be able to interpret higher-level meaning.

There is positive evidence that this has to do with layers of neural
circuitry, which recent neural net designs have confirmed in their outputs.

Additional layers can be added to machines, but it is not so easy to do with
humans.

The danger is in autonomy and might. Autonomy being the higher-level cognition
mentioned earlier, and might being the ability to bruteforce its way through
problems by sheer speed and iteration, which computers excel at.

It would be great if you guys could spot flaws in this argument. The
conclusions seem quite grim from my current outlook. It'd be nice to be proven
wrong.

------
pi-squared
I just love to see how my mind is so fallible. I want to believe that many of
the things I hold in my head are data, facts and logical theories, and I
forget what assumptions I have made to get there. I'm not saying whether the
author is right or not, but he brilliantly points out some potential flaws and
logical leaps that must be taken to get there.

------
Koshkin
Intelligence, having been born as a biological defense mechanism, is evil. It
is a weapon. For an intelligent individual, being (or, rather, appearing)
"good" is just another layer of defense - this time against other, likewise
intelligent, individuals. "An armed society is a polite society."

 _Spying_ is a form of intelligent behavior. ("Intelligence.") So is
_stealing_ (and not getting caught, if possible). And, no doubt, hacking [1]
is, too.

Wouldn't absolute intelligence mean absolute evil, then?

[1] [http://www.reuters.com/article/us-cyber-ukraine-
idUSKBN14B0C...](http://www.reuters.com/article/us-cyber-ukraine-
idUSKBN14B0CU)

~~~
toasterlovin
Your skin is also a biological defense mechanism. Is it evil, too?

------
EnFinlay
>Eventually it gets to a near-superhuman level, where it's funnier than any
human being around it.

>>My belt holds up my pants and my pants have belt loops that hold up my belt.

>>What's going on down there?

>>Who is the real hero?

I love that he used a Mitch Hedberg joke for this.

~~~
mirimir
No, no, no. Belt loops hold belts down ;)

------
mdale
The comparison to alchemy kind of puts things on a many-generations scale, and
one in which the fundamentals were far from grasp. Maybe a better comparison
would be to the Wright brothers' first flight against the time that it took to
land on the moon: 1903 to 1969.

I do agree with the point about not doing enough around the immediate impact
of AI. The autonomous / electric vehicle transition alone is going to rip
apart the global economy; preparing for that transition could help mitigate
its negative impact on certain populations.

------
alouisos
I would love to see more articles like that from people building our current
AI to discuss real problems on current AI systems so we can start working on
solutions. For example human level interpretability as required by law in EU
for AI systems eg targetted ads is a limiting factor to AI progresss because
some of the most advanced currents AIs are not interpretable and maybe
shouldn't because we can now engineer intelligence different from ours but not
necessarily dangerous (or work on making it safe). To my opinion this is a
more pressing matter than a divine future paper clip AI killer. Making an
assumption of a so called "self recursive superAI" and taking it from there is
actually diminishing the power of arguments towards the dangers of AI which is
an important discussion sometimes abused by people that have never actually
built one and extrapolate Gant mind philosophy arguments towards a dangerous
future, which is impressive but avoids any proposal of solution to current
potential AI dangers which should maybe included as part of the arguments.
Important matters as AI safety can be collectively discussed by engineers and
philosophers together based on current state and near future potential and not
only as a sci-fi future of a god like entity that has nothing to do with our
current AI situation. My two cents, hope I am not offending anyone.

------
nickbauman
Moore's Law (on a single CPU core) effectively ended around 2009. Our
programming idioms for parallel processing are still so crude, hardly anyone
can write maintainable parallelized code for anything but toy problems. Until
we can restart Moore's Law, this is all academic.

[http://www.globalnerdy.com/2007/09/07/multicore-
musings/](http://www.globalnerdy.com/2007/09/07/multicore-musings/)
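
To be fair, the easy end of the spectrum is genuinely easy now; it's shared
mutable state that stays hard. A minimal embarrassingly-parallel sketch in
Python (my illustration, not from the linked post):

    # An embarrassingly parallel map: trivial because the workers share nothing.
    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        with Pool() as pool:
            print(pool.map(square, range(10)))  # [0, 1, 4, 9, ...]

The "toy problems" point stands for everything past this: once tasks must
coordinate over shared state, locks, races, and non-determinism make code hard
to write and harder to maintain.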

~~~
grkvlt
But the law says nothing about cores or clock speeds etc., just transistors
and their sizes. The regular reductions in feature size on dies are still
happening, as I understand it.

~~~
nojvek
A copper atom is about 0.2nm across, and we are already at 10nm fab processes.
We can't get much smaller, otherwise things become unstable and heat becomes a
major issue.

On the positive side, electricity flows much faster than signals cross
synapses, and transistors are smaller than neurons. So with today's
technology, if we were to pack parallel silicon cores into the size of a human
brain, we'd have a much faster brain.
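
(Running the numbers on both claims, using standard textbook figures:

    10\,\mathrm{nm} / 0.2\,\mathrm{nm} \approx 50 \ \text{atoms across one feature}

    v_{\text{axon}} \lesssim 10^{2}\,\mathrm{m/s} \quad \text{vs.} \quad v_{\text{wire}} \approx 2 \times 10^{8}\,\mathrm{m/s}

Nerve conduction tops out around 100 m/s, while electrical signals in a
conductor propagate at an appreciable fraction of c, so the speed gap is
roughly six orders of magnitude.)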

~~~
hetid
The heat issue makes the latter impossible.

------
return0
For the longest time the conspiracist inside me told me that someone had
already created a monstrous hyperintelligence, and that the bombardment of
articles about the ethics of artificial intelligence was just a warning before
the apocalypse that would finish all civilization. Nowadays when I read
another AI ethics article, I just take it as proof that the AI bubble has
reached hyperinflation.

------
mrob
Wooly Definitions: The only definition that matters is ability to make correct
predictions about the future. All other things called "intelligence" are a
consequence of this. This definition is formalized with AIXI, which is
uncomputable, but computable approximations exist.
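
(For reference, AIXI's action selection looks roughly like this; I'm
reproducing Hutter's standard formulation from memory, so check his papers for
the exact notation:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over environment programs, and
\ell(q) is program length. The 2^{-\ell(q)} Solomonoff prior over all programs
is what makes it uncomputable; approximations like AIXItl and MC-AIXI truncate
the search.)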

Hawking's Cat, Einstein's Cat: Scams require some intelligence from the
victim. The victim needs to mistakenly believe they're doing something smart.
Cats are too stupid to scam. Unlike humans they can't talk and they fail the
mirror test, suggesting no self awareness. Human behavior when confronted with
a deceptive superintelligence is not going to be the same as cat behavior when
confronted with a deceptive human.

Emus: The name "Great Emu War" is a joke. The humans had limited resources
available, because killing emus wasn't important enough to justify more. If we
really wanted to kill all the emus we could do it. We've made plenty of other
animals extinct. The motivation for an AI is set by its reward function, which
can be as high as necessary.

Slavic Pessimism: This argument suggests that building nuclear weapons is
impossible.

Complex Motivations: This isn't obvious nonsense, but consider that all
intelligences we've seen so far are the result of evolution, which tends to
produce more complexity than needed. A leg is more complicated than a wheel.
A non-evolved intelligence would not necessarily have complex motivations.

Actual AI: Compare with the argument from actual primitive nuclear reactors,
which get mildly warm and never explode.

My Roommate: As a human, your roommate had a human reward function.
Unsurprisingly, he acted like a human. Why should a non-human reward function
result in human-like behavior?

Brain Surgery: Brain surgery would be a lot easier if you could take backups
of brains, and duplicate them, and you had no concern about killing the
patient.

Childhood: If this turns out to be needed, why can't increasing intelligence
result in increased ability to simulate an environment suitable for raising
superintelligences?

Gilligan's Island: There's no reason to assume an AI would be isolated. It
could have or gain access to the internet and most human knowledge, and the
mind architecture could contain many independent sub-minds.

Grandiosity: This depends on assigning ethical value to hypothetical humans,
which isn't obviously correct.

Outside Argument, Megalomania, Comic Book Ethics, String Theory For
Programmers: Ad-hominem.

Transhuman Voodoo, Data Hunger, AI Cosplay: Why should something be false
because it's deplorable? And why should something be false because it
encourages deplorable behavior?

Religion 2.0: Ted Chiang talked about the definition of "magic" in an
interview ([https://medium.com/learning-for-life/stories-of-ted-
chiangs-...](https://medium.com/learning-for-life/stories-of-ted-
chiangs-...)).

>Another way to think about these two depictions is to ask whether the
universe of the story recognizes the existence of persons. I think magic is an
indication that the universe recognizes certain people as individuals, as
having special properties as an individual, whereas a story in which turning
lead into gold is an industrial process is describing a completely impersonal
universe.

All religions require some element of magic. Even Buddhism, which is arguably
the least magical of all religions, treats consciousness as magic. AI requires
no magic, therefore it is not a religion.

Simulation Fever: Simulated universes do not have to be magical by Chiang's
definition. A universe could be simulated by something that pays no attention
to individuals within the simulation, eg. something that lets a large number
of universes run their course and then examines them statistically. This
increases the possibility of living in a non-magical universe despite the
possibility of living in a simulation.

Incentivizing Crazy: This isn't an argument, it's a description of a field.
Perhaps the author meant it to be an ad-hominem: "the idea is false because
crazy people believe it".

~~~
TeMPOraL
> _Hawking 's Cat, Einstein's Cat: Scams require some intelligence from the
> victim. The victim needs to mistakenly believe they're doing something
> smart._

Only if your scam is too complicated for the victim :). Cats are pretty easy
to scam - you just have to stick to simple things. Cats respond predictably to
food they like. As a cat person, I find this trick to be more than enough for
all my needs ;).

This nitpick aside, I second all your points.

> _If this turns out to be needed, why can 't increasing intelligence result
> in increased ability to simulate an environment suitable for raising
> superintelligences?_

Indeed, we could feed the required stimuli to an AI faster than real-time. VR
is a thing already, and so is the "fast-forward" button.

------
ComputerGuru
I did read the article through, but made up my mind that the author is
presenting a flawed argument when I saw how quickly he skimmed through his
base premises, not really giving them much in terms of fair thought. In
particular, I don't see how he can so blithely assume that there is no quantum
effect in the brain structure. I feel there's some degree of
arrogance there, as we have not yet even begun to unlock the inner secrets of
the brain nor come close to mimicking it. Our best approximations of
intelligence are analogous to the comparison between sewing by hand vs the
mechanics of a sewing machine, only the sewing machine remains far inferior
(at least for now).

~~~
robotresearcher
> so blithely assume that there is no quantum effect in the brain structure.

He is following the mainstream opinion on this view, so does not have to
justify it in detail. Almost nobody subscribes to Penrose's proposal.

> Our best approximations of intelligence are analogous to the comparison
> between sewing by hand vs the mechanics of a sewing machine, only the sewing
> machine remains far inferior (at least for now).

There's a blithe assumption of your own, which you don't attempt to justify.
Newell and Simon's physical symbol system hypothesis is still taken seriously
by mainstream AI, and it has the opposing view: that intelligence is a
computational process, and thus we understand many fundamental things about it
already.

I'm not taking a position here, just clarifying what the dominant positions
are right now.

Also, sewing machines are better at sewing than 99% of people by any metric I
can think of, and faster than 100% of people. They also do probably >90% of
all the sewing on the planet. So I don't think your analogy does the work you
intended.

------
TheBlight
ITT: True Believers of the Church of The Singularity

------
pron
This is what I think about an imminent AI danger and those who believe in it:

1\. It’s not as near as you think. Current machine “intelligence” is hardly on
par with the abilities of an insect, let alone anything we’d call intelligent.
Yeah, it’s capable of analyzing a lot of data, but computers could always do
that. We don’t really know how to move forward, and there has been little
progress in theory in a long while. We haven’t made any significant
theoretical breakthrough in 20-40 years.

2\. Intelligence isn’t a general algorithm to solve _everything_. For example,
humans are not good at approximating solutions to NP-complete problems. Other
algorithms make better use of computational resources to solve problems that
intelligence is not good at, and super-intelligence is not required to come up
with those algorithms, as they use brute force on cheap computing nodes.
Intelligence also isn't necessarily good at solving human problems, many of
which require persuasion through inspiration or other means.

3\. We don’t know what intelligence is, so we don’t know if the intelligence
algorithm (or a class of algorithms) can be improved at all. Simply running at
higher speeds is no guarantee of more capabilities. For all we know, an
intelligent mind operating at higher speed may merely experience time as
moving very slow, and grow insane with boredom. Also, we don’t know whether
the intelligence algorithm can multitask well to exploit that extra time.

But most interesting of all is what I think is at the core of the issue:

4\. Intelligence is not that dangerous — or, rather, it’s not so much more
dangerous than non-intelligent things. This is related to point 2 above. We
can obviously see that in nature, but also in human society. Power correlates
weakly with intelligence beyond some rather average point. Charisma and other
character traits seem to have a much bigger impact on power. Hitler wasn’t a
genius. But some smart people — _because_ they are smarter than average —
fantasize about a world where intelligence is everything, and not in a binary
way, but in a way that gives higher intelligence non-linearly more power. This
is a power fantasy, where the advantage they possess translates to the power
they lack.

------
jessup
>AI alarmists are fond of the paper clip maximizer, a notional computer that
runs a paper clip factory, becomes sentient, recursively self-improves to
Godlike powers, and then devotes all its energy to filling the universe with
paper clips.

>It exterminates humanity not because it's evil, but because our blood
contains iron that could be better used in paper clips.

The consolation of the cingularity narrative is that it tells a story in which
the world is not—yet—being destroyed by paper clip maximizers—in cingularity
land, the nightmare scenario is still a ways off, and thankfully, with the
help of a few Ayn Rand types, market forces can still save us.

~~~
waqf
From your comment I'd like to segway into a discussion about how brand names
are causing people to forget how to spell normal English words.

------
noajshu
"We’ve learned that at least one American plutocrat (almost certainly Elon
Musk, who believes the odds are a billion to one against us living in "base
reality") has hired a pair of coders to try to hack the simulation."

\-->source?

~~~
idlewords
[http://www.newyorker.com/magazine/2016/10/10/sam-altmans-
man...](http://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-
destiny)

------
danielam
Apropos: [http://edwardfeser.blogspot.com/2015/02/accept-no-
imitations...](http://edwardfeser.blogspot.com/2015/02/accept-no-
imitations.html)

------
palladiol
About the we live in a computer simulation thing. When I look at the universe
or subatomic particles, I see physical reality and I don't see anything that
indicates a computer simulation. We just don't know the next step. Positing a
computer simulation as the next physical explanation of the infinitely great
just seems to me to display a lack of imagination rooted in our time, and it
resembles belief in a god. I would tend to agree with the
author in that regard that some in our industry tend toward mysticism where
computers are the religion.

------
faragon
In my opinion, the problem is not intelligence per se, but intelligence
willing to change everything in a totalitarian/idealist way. No matter if it
is "artificial" or "natural".

------
ajamesm
> But of course we know that there are all kinds of configurations of matter,
> like a motorcycle, that are faster than a cheetah and even look a little bit
> cooler.

yet more inflammatory rhetoric from the Pinboard guy

~~~
mathgenius
He's usually so much funnier than this! Something must be going on. Maybe his
dog died.

------
MisterBastahrd
The problem with military coups is that a lot of the people who are involved
in them never survive the regime they create. When you are trying to
consolidate power as a figurehead, the first thing you do is make sure that
anyone with questionable loyalties, whether true or not, is eliminated and
replaced with people who are grateful to hold their position.

Intelligent people know this, which is why there really aren't that many
military coups, regardless of how those in the military feel about their
political masters.

------
erikpukinskis
The idea that huge quantities of computing power will lead to massively better
intelligence is like saying huge quantities of barbecue sauce will lead to
massively better spare ribs.

~~~
jaibot
This is not what superintelligence people are worried about, in general. The
human brain is already embarrassingly parallel. I'm sure you can find at least
one person who will advocate for the "Moore's Law=Doom" scenario, but you
won't find that argument endorsed by anyone currently working on AI Safety.

------
amai
Why is everyone afraid of artificial intelligence? I'm more afraid of natural
infinite human stupidity. Superintelligence would just balance that out ;-) .
But seriously, here is my critique of Nick Bostrom's arguments about
superintelligence: [https://asmaier.blogspot.de/2015/06/superintelligence-
critiq...](https://asmaier.blogspot.de/2015/06/superintelligence-critique-of-
nick.html)

------
erikbye
What the author is actually talking about is of course the mythological
techno-singularity event, which is of course bullshit.

Have you ever programmed any piece of software that suddenly implemented
features of its own? Did your program become sentient?

Laughable. These are child-like fantasies belonging in 50s sci-fi.

What I do know for a fact, is that when it comes to AI fear mongers, there is
not much intelligence to be detected.

~~~
Nyubis
>Have you ever programmed any piece of software that suddenly implemented
features of its own?

I'm not sure what particular fear mongers you're talking about, but Bostrom
and the like are talking about software that's specifically made to change and
improve itself on a fundamental level.

~~~
erikbye
I'm saying that's the pipe dream. Software won't change on its own.

~~~
Nyubis
We already have software that can change on its own. Neural nets are software
too, and with things like backpropagation they can update their weights,
essentially changing themselves.

I'm not saying that this level of change is enough to get the disaster
scenarios that Bostrom talks about, but it's a folly to say that self-changing
software can't possibly exist.
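
To be concrete about what that self-change looks like, here is a minimal
gradient-descent weight update in plain Python; my own sketch, not anyone's
framework:

    # A one-parameter model y = w * x with squared-error loss L = (w*x - t)^2,
    # so dL/dw = 2 * (w*x - t) * x. Backprop is this chain rule, scaled up.
    w = 0.5            # the state the program rewrites on its own
    x, t = 2.0, 3.0    # training input and target
    lr = 0.1           # learning rate

    for _ in range(20):
        grad = 2 * (w * x - t) * x
        w -= lr * grad  # the "self-change": updating its own parameter
    print(w)            # converges toward t / x = 1.5

Whether that counts as software changing itself, or merely as a program
updating a variable, is exactly what the reply below disputes.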

~~~
p1esk
Neural net changing its weights is no different from any other software
updating the values of its variables.

------
vinceguidry
I think brains very probably use quantum effects in ways we might not even be
able to study with anything close to today's technology. As a result,
individual neurons or groups of neurons can be way more complicated than we
are expecting them to be.

I'd say we're at least 2 major revolutions away from even coming close to a
chimpanzee's intellect, much less a human's.

~~~
WorldMaker
This is at least partly relevant to the discussion: [http://www.smbc-
comics.com/comic/the-talk-3](http://www.smbc-comics.com/comic/the-talk-3)

As a shorthand "quantum effects" are often hand-waved as crazy powerful
"weird" things, but we can certainly model them and they really aren't as
"magic" as a lot of popular science would have one believe (as much as even
some of us otherwise rational people so strongly wish to believe in quantum
magic).

~~~
canadian_voter
Love the bit at the end: "Quantum computing and consciousness are both weird
and therefore equivalent."

Quantum consciousness doesn't sound very different from plain old vitalism.

------
bryanrasmussen
I think these are problems if it indeed turns out that intelligence correlates
with making goals and pursuing said goals. But given the other article on the
front page at the moment (which I am totally intelligent enough to link to,
but too unmotivated to do so), which says that intelligence is not linked to
success, I just don't know that we can assume that correlation.

------
hyperion2010
A key missing piece for intelligence: how many measurements can you make on
the world? It doesn't matter how much computational power something has if it
can't make measurements and ask and test hypotheses.

Edit: should state that this is a corollary of "> The Argument From Actual
AI," but generalized to any 'intelligent' system, not just neural nets.

------
tboyd47
Glad to see a voice from the strong AI skeptic camp here. Reminds me of a book
I read a long time ago called "Great Mambo Chicken and the Transhuman
Condition." I used to drink the kool-aid myself until a friend of mine snapped
me out of it by saying, "Dude, you're telling me you actually want Skynet??" I
gave him my copy of the book.

------
carlosgg
The New Yorker did an article on Bostrom and the Future of Humanity Institute
last year:

[http://www.newyorker.com/magazine/2015/11/23/doomsday-
invent...](http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-
artificial-intelligence-nick-bostrom)

------
joebubna
The thing I think everyone should be most worried about is not robots deciding
to kill us, it's the economic upheaval that could result from robots that can
do jobs better than their human counterparts.

There have been studies done on this type of thing
([https://journalistsresource.org/studies/economics/jobs/robot...](https://journalistsresource.org/studies/economics/jobs/robots-
at-work-the-economics-effects-of-workplace-automation)) and so far use of
robots has mostly focused on helping human workers become more productive, and
not replacing them entirely. However, for lower skilled workers this isn't
always true, and if robots were able to replace even the most skilled
workers... that could cause problems for human employment. To quote the
article on current status:

"Robots had no effect on the hours worked by high-skilled workers. While the
authors found that industrial robots had no significant effect on overall
employment, there was some evidence that they crowded out low-skilled and, to
a lesser extent, middle-skilled workers."

Now continue that line of thinking and imagine a world where a robot could do
any job better than a human...

We could end up with a "is that American made?" or "is that free range
chicken?" type of scenario where companies that refuse to replace human
workers with robots are competing against other companies that will do
anything to lower costs, even if ethically questionable.

So then we potentially end up in a situation where the rich (Executives and
stock holders) get richer by replacing costly human workers with cheaper, more
efficient robots, and the wealth of the average family declines as people
struggle to find work. Alright, well maybe we give all humans an allowance to
live off of, food to eat, a home to live in, etc. Except...

Human beings need work. They need to feel a sense of purpose. I don't think
the humans from the movie Wall-E hold much appeal. Let's not go there.

Ok, so maybe we pass laws against replacing many human jobs with robots. Well,
if the robots are truly intelligent, aren't we then discriminating against a
group of sentient beings solely because they are too good at their job? Isn't
this just going to be a techno world version of the civil rights and lgbt
rights movements?

These are the things I worry about. Not robots killing me.

As a side note, I hope cyborg and other bio tech improves at some point, at
lot of these concerns could be mitigated if humans had the potential to
improve themselves beyond any normal evolutionary rates.

~~~
internaut
Look at Moravec's Paradox.

It's probably the most valid result from the research and yet it is
overlooked. We'll have AI capable of replacing the middle class before we have
robots capable of replacing the working class.

------
eltoozero
The quintessential "rogue AI" scenario, pre-Terminator SkyNet but post
"Metropolis", is "Colossus: The Forbin Project".

It's finally available on widescreen blu-ray in Germany but Universal has
still not re-released it for the western audience.

The filmed exterior of Colossus HQ is the Lawrence Hall of Science in
Berkeley, CA.

------
nojvek
I have to say how well maciej writes. I hope someday I am able to write and
express my thoughts as well as he can.

------
arisAlexis
I think that we vastly underestimate bias on this subject. Smart people are
either afraid of AI or afraid of change. This article comes from the second
category, people who _wish_ things would stay the same. There are also some
very weak arguments in it.

------
erikbye
As always it seems the fear mongers in AI do not actually know how programming
and "the machine" works.

Also, depending on how you measure intelligence, "machines" have been way
smarter than humans since the first calculator.

------
bootload
such a great read, here's the video ~
[https://www.youtube.com/watch?v=kErHiET5YPw](https://www.youtube.com/watch?v=kErHiET5YPw)

------
sonink
I am one of those who is convinced that AI will destroy humanity and
Elons/others efforts will not help.

It's hard for me to argue that this idea isn't eating me.

------
mavdi
At the pace the world is moving towards its doom, one can be forgiven for
thinking the Internet has gained consciousness and is pulling tricks to get us
all killed.

------
zaroth
I feel like this essay is missing a key point around defining intelligence.
Machines can be trained to emulate intelligence, but machines are not
themselves intelligent.

Our neural nets can drive cars, convert speech to text, recognize images, and
maybe even carry on a conversation.

But there's no light behind the eyes. It's all synthetic. It's an emulation
which mimes intelligence through the brute force of observing billions of
inputs and their associated outputs.

What it decisively cannot do, is something that it never saw before, except by
mistake.

Wake me when none of this is true?

~~~
idlewords
There's a longstanding philosophical argument about this. Google 'Searle's
Chinese Room'.

~~~
zaroth
I believe it's fair to say that a computer programmed with knowledge of
sufficient numbers of inputs and outputs does indeed "understand" \-- in fact
the computer can truly speak and converse in Chinese quite well. And I believe
it's a distinction without a difference to say the computer does not, in fact,
_understand_ Chinese in this case.

Instead of arguing semantics I propose a different sort of distinction.

A computer programmed as a deep neural net can understand remarkably well, and
with that understanding it becomes a remarkable tool for automation.

However it remains nothing more than a tool. No more intent than a hammer. And
without will, certainly without will to evolve.

Only in the sense that the algorithm is programmed to improve its fitness, it
calculates coldly towards that end. Not ever in an innovative sense, and
certainly not in an adversarial sense.

I agree thoroughly with the other commenters who propose that it's not that AI
will defeat us, but rather that AI will be so useful that the economic damage
will be extreme.

AI will defeat us by replacing the need for us in all productive endeavors.
Anything we can do it can do better, cheaper, so far up the value chain that
only the elite will remain gainfully employed.

Too much of human labor will be eaten by AI; we had better get a hell of a lot
better at educating our masses if we ever hope they will have something
productive they're able to do.

I wonder how the theory of competitive advantage stacks up against AI -- a
good for which there is no scarcity!

------
mirimir
What's interesting is the evolution of consciousness. In the long term, does
it matter whether or not it's embodied in meat?

~~~
placebo
I distinguish between consciousness and self-consciousness, but as to your
question, I don't think it matters at all in what form self consciousness
takes place, and the moral thing to do when something appears to be self-
conscious is to treat it with the same kindness you would expect for yourself
(unless it is obvious that this something is a clear and present danger). I
can totally see how from a moral perspective a human should be put on trial
and imprisoned for killing a sentient "machine" \- though unfortunately,
looking at how as a society we treat animals and many times look the other way
at mass cruelty to our own kind, I'm not really sure I'd mind a super-
intelligence (and perhaps as a result also benevolent) ruling us instead of
the other way around.

Of course, there's the possibility that a super-intelligence would not be able
to break through the human egocentric limitations that may have been built
into it by its human creators, and in that case, we're screwed...

~~~
mirimir
Right, self-consciousness.

I see evolution generally as selection in various configuration spaces.
Specifics at various levels -- cosmogenesis, nucleosynthesis, life,
consciousness, ... -- are different, of course. But it's arguably the same
process.

------
lixxz
If AI is able to self-improve, don't you think it will reach a level of
complexity where it will start asking: why?

------
arieskg
Say that the simulation time period is 1 billion years. To the post-
singularity, a few thousand years are minor details in history, like what
Napoleon had for breakfast the morning before the Battle of Waterloo.

Let the thoughts flow as far as you'd like, but come back to reality once in a
while. After all, your ideas are conditional to your body.

------
arisAlexis
I published this as a kind of answer :
[https://medium.com/@arisAlexis/superintelligence-a-biased-
ar...](https://medium.com/@arisAlexis/superintelligence-a-biased-
argument-62ba7aa57cc8#.z34llfk8n)

------
jeisc
Could AI be programmed to have intuitions or irrational behaviors?

------
contingencies
Tangent: What's the tech scene like in Zagreb?

~~~
nikolaplejic
It's pretty good.

There are several hackerspaces
([https://wiki.hackerspaces.org/Zagreb](https://wiki.hackerspaces.org/Zagreb)
\- I'm partial to the one in Mama), and every now and then we'll gather around
a "Nothing will happen" ([http://www.nsnd.org/](http://www.nsnd.org/)).

There's a vibrant meetup scene (well-covered by meetup.com), and WebCamp
Zagreb ([https://2016.webcampzg.org/](https://2016.webcampzg.org/)) is the
community conference that tries to gather different meetups & communities once
a year.

Companies from abroad tend to open development offices here to exploit the
cheaper workforce, especially since Croatia has joined the EU. There's also a
number of local companies that are constantly hiring, resulting in a solid
amount of hiring opportunities even for the part of the crowd that's a bit
pickier.

People are leaving Croatia in general, though, and the tech community isn't
immune to that. Lots of people moved to other EU countries, and although
that's not unexpected at all, I believe it's left a dent within all of the
above.

If you decide to visit, give me a shout. :)

~~~
idlewords
The parent is one of the organizers of this conference, and I just want to say
what a wonderful group ran WebCamp, and how welcoming and friendly they were.

If you get a chance to speak there, or attend, take it!

------
JepZ
Shouldn't a superintelligence be smart enough to become another Gandhi? Why
not?

------
bronlund
I for one welcome our new superintelligent overlords. I can't imagine they
could do much worse than us. [https://xkcd.com/1626/](https://xkcd.com/1626/)

------
zebraflask
Unpersuasive premises. Obvious alarmism.

------
mjgeddes
Let me just elaborate on the ‘complex motivations’ idea, because I certainly
think that ‘orthogonality’ is the weak point in the AGI doomsday story.

Orthogonality is defined by Bostrom as the postulate that a super-intelligence
can have nearly any arbitrary goals. Here is a short argument as to why
‘orthogonality’ may be false:

In so far as an AGI has a _precisely_ defined goal, it is likely that the AGI
cannot be super-intelligent. The reason is because there’s always a certain
irreducible amount of fuzziness or ambiguity in the definition of _some_ types
of concepts (‘non-trivial’ concepts associated with values don’t have
necessary definitions). Let us call these concepts, fuzzy concepts (or
f-concepts).

Now imagine that you are trying to define the goals that will let you specify
that you want an AGI to do _precisely_ , but it turns out that for certain
goals there’s an unavoidable trade-off: trying to _increase_ the precision of
the definitions _reduces_ the cognitive power of the AGI. It’s because non-
trivial goals need the aforementioned ‘f-concepts’, and you can’t define these
precisely without over simplifying them.

The only way to deal with f-concepts is by using a ‘concept cloud’ – instead
of a single crisp definition, you would need to have a ‘cloud’ or ‘cluster’ of
multiple slightly different definitions, and it’s the totality of all these
that specifies the goals of the AGI.

So for example, such f-concepts (f) would need a whole set of slightly
differing definitions (d):

F = (d1, d2, d3, d4, d5, d6, …)

But now the AGI needs a way to integrate all the slightly conflicting
definitions into a single coherent set. Let us designate the methods that do
this as <integration-methods>.
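
As a toy sketch of the idea in Python (every name here is hypothetical;
nothing like this exists in real AGI work):

    # A fuzzy concept as a cloud of slightly different definitions, plus an
    # <integration-method> that reconciles them. Purely illustrative.
    from statistics import mean

    class ConceptCloud:
        def __init__(self, definitions):
            self.definitions = definitions  # d1, d2, d3, ...

        def score(self, situation):
            # The <integration-method>: naive averaging here. The argument is
            # that a real AGI must *learn* better integration methods, and
            # that search is the emergent "extra" goal.
            return mean(d(situation) for d in self.definitions)

    happiness = ConceptCloud([
        lambda s: s["pleasure"],
        lambda s: s["life_satisfaction"],
        lambda s: 1.0 - s["suffering"],
    ])
    print(happiness.score({"pleasure": 0.7, "life_satisfaction": 0.5, "suffering": 0.2}))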

But finding better <integration methods> is an _instrumental_ goal (needed for
whatever other goals the AGI must have). So unavoidably, _extra_ goals must
emerge to handle these f-concepts, in addition to whatever original goals the
programmer was trying to specify. And if these ‘extra’ goals conflict too
badly with the original ones, then the AGI will be cognitively handicapped.

This falsifies orthogonality: f-concepts can only be handled via the emergence
of _additional_ goals to perform the internal conflict-resolution procedures
that integrate multiple differing definitions of goals in a ‘concept-cloud’.

In so far as an AGI has goals that can be _precisely_ specified, orthogonality
is trivially true, but such an AGI probably can’t become super-intelligent.
It’s cognitively handicapped.

In so far as an AGI has _fuzzy_ goals, it can become super-intelligent, but
orthogonality is likely falsified, because ‘extra’ goals need to emerge to
handle ‘conflict resolution’ and integration of multiple differing definitions
in the concept cloud.

All of this just confirms that goal-drift of our future descendants is
unavoidable. The irony is that this is the very reason why ‘orthogonality’ may
be false.

------
kodfodrasz
I'm not sure if this is meant to be funny somehow, or taken seriously, but the
series of strawmen presented as arguments, e.g. referencing some American
sitcom as an example of a phenomenon, is pretty weak. It is sad, because I can
agree with many of the ideas, but the flawed reasoning is not convincing for a
critical reader, and it wastes those arguments...

Although the read is overall interesting, I find the style and arguments
sub-par, too American for my taste.

------
vacri
It puzzles me that people think that the nanosecond some superintelligence
comes into being, it also has the capacity to destroy humans/earth/whatever.
No thought is given as to how this intellect gets said capacity, bar 'launch
the nukes'.

Seriously, if "it" turned against "us", we'd have the upper hand. For example,
quality electronics are hard to make without sending African children down
into mines and having African adults shoot each other over the results (ie:
conflict minerals). If a superintelligence is reliant on us to propagate its
physical-world interactions, we're going to be just fine.

I mean come on, we all work in IT, and we all know just how difficult it is to
keep hardware running securely, safely, and in good order. Stuff fails all the
time. Similarly, we all know people who are _really intelligent_ , but this
doesn't translate to success in life for them.

In short, "being intelligent" isn't enough - the entity also needs ways to
effectively work the world around them.

edit: heh, just saw another article on the front page: "IQ is only a minor
factor in success"

------
kowdermeister
Am I the only one who thought that the points he came up with to refute AGI /
ASI actually made the concerns deeper?

------
gfody
We should figure out intelligence before speculating on super-intelligence. I
think Kurzweil and Hofstadter have a compelling model in 'How to Create a
Mind' and 'Surfaces and Essences: Analogy as the Fuel and Fire of Thinking',
but it's not exactly rigorous and we still haven't created anything that could
pass the Turing test which wouldn't even require _high_ intelligence - just
2nd grade level or something.

------
brotherjerky
> I can't point to the part of my brain that is "good at neurosurgery",
> operate on it, and by repeating the procedure make myself the greatest
> neurosurgeon that has ever lived. Ben Carson tried that, and look what
> happened to him.

Nice, but does it fall under the political ban?

~~~
idlewords
The political ban ended two days after it started.

~~~
brotherjerky
Good to know, it was a terrible idea.

