
Hostile AI: You're soaking in it - mortenjorck
http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-soaking-in-it.html
======
brudgers
Larry Page's view that the last best hope for humanity is some white guy with
the hubris to tell the residents of Camden that his not being allowed to sell
$60,000 luxury cars is the most pressing issue they face, didn't surprise me.
The American notion of Manifest Destiny isn't dead. It's not even past.

The mistake that many people of the developed world seem to make is to think
that the Pages of the world see us as a distinct class apart from the billion
global poor who live on less than $1 a day. It's a straight line from their
suffering being just the business cost of sending rockets to Mars, to our
$60,000 a year lives being the cost of doing it at scale.

Right now, we can be comfortable in sharing Page's and Elop's abstractions.
People we know aren't affected. When it comes our time, the best we can hope
for is to be the Sixth Army on the Volga and get some lip service that our
suffering was honorable and our death glorious and will go down in history a
thousand years hence.

Look up your street to the left, then down it to the right. If you don't see a
native American, it's probably you.

~~~
lnanek2
I do feel for people dying of polio in third world countries. But it is just
silly to think that Gates-style philanthropy of saving those people is going
to change the world. It just means a few more people in third world
countries will live who might not, and those countries might perform a little
better GDP wise and take care of their people a little bit better, and a
microscopic few of those people may someday make a change in the world by
inventing something or creating some huge company or something.

Meanwhile, getting everyone in the US to use electric vehicles instead of gas
would make a huge difference in the world. A society where transportation
costs are much lower will be a very different society. Throw in Google self-
driving cars and we may not be mostly jammed into crime-ridden cities any
more, for example, since commuting would cost a few pennies of electricity
spent sitting in a cabin sleeping or working on a laptop instead of driving.

~~~
brudgers
Polio eradication is only one of many health initiatives funded by the Gates
Foundation. It is an outlier in terms of total lives saved and was selected
because the funding could leverage several generations of polio-mortality-
reduction initiatives (iron lungs, the Salk vaccine, Rotary's PolioPlus) into
complete eradication.

A million children a year die from gastro-intestinal problems and diarrhea
[http://www.gatesfoundation.org/What-We-Do/Global-Health/Ente...](http://www.gatesfoundation.org/What-We-Do/Global-Health/Enteric-and-Diarrheal-Diseases)

1.3 million children died from pneumonia in 2011:
[http://www.gatesfoundation.org/What-We-Do/Global-Health/Pneu...](http://www.gatesfoundation.org/What-We-Do/Global-Health/Pneumonia)

200 million people were afflicted with malaria in 2010:
[http://www.gatesfoundation.org/What-We-Do/Global-Health/Mala...](http://www.gatesfoundation.org/What-We-Do/Global-Health/Malaria)

Sure, you probably don't know any of those people. They don't have laptops or
cars or spare cabins in the woods. And they certainly don't hang out at
libertarian circle jerks.

------
gwern
For the sake of argument, let's assume OP is correct: the finance industry and
corporations in general are indeed blood-sucking vampire squids jamming their
funnels into everything that smells of money and are antithetical to human
values. (I strongly disagree with it, but others are criticizing that aspect,
so I'll move on.)

So what? How does this justify his conclusion:

> But they have chosen a fantasy version of the problem, when human interests
> are being fucked over by actual existing systems right now. All that brain-
> power is being wasted on silly hypotheticals, because those are fun to think
> about, whereas trying to fix industrial capitalism so it doesn’t wreck the
> human life-support system is hard, frustrating, and almost certainly doomed
> to failure.

Even if the finance industry / corporations are a parasite, so what? That
doesn't wish away AI: AI will still remain a threat (or not a threat)
regardless of whether Goldman Sachs is dissolved. Threats are not a zero-sum
game: if you show that finance is a parasite, that doesn't suddenly alter the
universe to guarantee AI is safe and easy.

Even if we interpret him charitably as making a nuanced cost-effectiveness
argument that 'we face many threats, have very limited resources, and
corporations are more cost-effective to fight than working on AI', that still
seems pretty darn unlikely. Corporations are a major political issue which
attract the attention and effort of hundreds of thousands of people and
inspire real world action like Occupy Wall Street; why does this fight need a
few more people in it? Diminishing returns set in a very long time ago. And
fundamentally, financial types have (or have not) been parasites throughout
history. Their impact is bounded. They are not anything new under the sun.

Whereas with AI, we don't know whether AI will ever exist; whether the impact
will be huge or close to zero; hardly anyone takes it remotely seriously
(quick question: have you ever read a mainstream article which did not make
either a _Terminator_ or 'Rapture of the Nerds' allusion?); and the sum total
of all efforts on thinking about the topic is, what, maybe $7m a year?
(Google's safety board + MIRI + FHI + the new existential risk org whose name
escapes me). You have to assume some pretty extreme values to make the cost-
effectiveness criterion spit out 'nobody should be thinking about AI! Everyone
should be swotting up on their Marx!'

Conflating socially-harmful corporations with 'hostile AI' does both topics a
disservice.

~~~
mtraven
OP here, nonplussed that his obscure blog suddenly is near the top of Hacker
News.

First, I don't think that corporations are nothing but parasites, and said as
much in the article. I did say that finance was parasitical. That was probably
too strong. Allocation of capital is a useful function; the problem is that
the allocators are helping themselves to a far larger cut of the proceedings
than the value they produce (and I don't particularly want to argue that here,
just wanted to clarify what I meant).

Second, the more central point: whether the Singularity Institute types are
wasting their time and should refocus their thinking. It is true that not
many people are thinking about the effects of superhuman AIs, and there's no
harm and possibly some good in a few smart weirdos focusing on that,
especially since corporations have been around and critiqued for so long.

However, we are so far away from producing anything remotely close to that
level of AI that I fear their work is ungrounded. It strikes me as fun quasi-
SF rather than serious engineering.

But I do believe they are seriously interested in world improvement, so my
suggestion to them in their work on "benevolent goal architectures" is to
study how existing goal architectures work, that is to look at existing
economic and social systems where disparate individual goals are coordinated
into cooperative or conflicting action, and how the results are related to
human goals and human flourishing.

For my part, I am in the process of trying to understand their friendly AI
work better so I can critique it from a more informed standpoint.

~~~
gwern
> However, we are so far away from producing anything remotely close to that
> level of AI that I fear their work is ungrounded. It strikes me as fun
> quasi-SF rather than serious engineering.

At what point would you allow research to go forth? And you know, there's a
lot of work that can be done which is not 'directly coding an AI'.

> But I do believe they are seriously interested in world improvement, so my
> suggestion to them in their work on "benevolent goal architectures" is to
> study how existing goal architectures work, that is to look at existing
> economic and social systems where disparate individual goals are coordinated
> into cooperative or conflicting action, and how the results are related to
> human goals and human flourishing.

But why would one expect any of that to transfer to AI? What do the internal
dynamics of a Board of Directors have to do with, say, the Code Red worm? What
lessons can we extract from 501(c)(3) nonprofits which will tell us anything
about deep-learning-based architectures? Do CEO salaries really inform our
understanding of Moore's Law? Or can study of Congressional lobbying seriously
help us better understand progress on brain scanning and connectomes? Do
Marxian dialectics truly help us improve forecasts for when feasible
investments in neuromorphic chips will match human brains?

The closer I look at corporations and modern economies, the more worthless
they seem for understanding the possibilities of AI, much less engineering
safe or powerful ones. Modern economies are based on large assemblages of
human brains, acculturated in very specific ways (remember Adam Smith's
_other_ major work: _The Theory of Moral Sentiments_), which are limited in
many ways, and which are fragile and nonrobust: consider psychopathy, or
consider economics' focus on 'institutions'. Why do some economies explode
like South Korea, and others go nowhere at all? Even with millennia of human
history and almost identical genomes and brains, outcomes are staggeringly
different. (You complain about corporations; well, how 'friendly' is North
Korea?)

And this is supposed to be _so_ useful for understanding the issues that we
should be focused on your favored political goals instead of directly tackling
the issues?

~~~
akiselev
> But why would one expect any of that to transfer to AI? What do the internal
> dynamics of a Board of Directors have to do with, say, the Code Red worm?
> What lessons can we extract from 501(c)(3) nonprofits which will tell us
> anything about deep-learning-based architectures? Do CEO salaries really
> inform our understanding of Moore's Law? Or can study of Congressional
> lobbying seriously help us better understand progress on brain scanning and
> connectomes? Do Marxian dialectics truly help us improve forecasts for when
> feasible investments in neuromorphic chips will match human brains?

Studying human systems is one of the best ways of studying complex systems and
systems engineering, which are already crucial for complex engineering
projects, like developing a complex AI. Before we can even talk about our
future binary overlords being hostile or friendly, we will have to study how
basic, but constantly developing, AI integrates and plays off of human social
systems. We have to gather quantified data about how two distinct forms of
intelligence interact and what, if any, conclusions can be generalized to a
future where humans are no longer the species with the highest intelligence.

You have no data about how real AIs would behave in our society except for
fiction, which contains no more guidance now than the Bible did for 16th
century astrophysics. We have no consistent models that explain our own
intelligence, let alone an artificial one that has yet to exist. You can
pontificate about Plato's ideal Terminator but it won't make a bit of
difference until we get our telescope.

~~~
gwern
> Studying human systems is one of the best ways of studying complex systems
> and systems engineering, which are already crucial for complex engineering
> projects, like developing a complex AI.

What does it mean to study a generic 'complex system' and 'systems
engineering' and what does this have to do with estimating the potential risks
and dangers?

> we will have to study how basic, but constantly developing, AI integrates
> and plays off of human social systems.

This presumes you already know all about what the AI will be and is putting
the cart before the horse.

> We have to gather quantified data about how two distinct forms of
> intelligence interact and what, if any, conclusions can be generalized to a
> future where humans are no longer the species with the highest intelligence.

Consider an aborigine making this argument: 'we have observed their firearms
and firewater, and know there are many unknowns about these white men in their
large canoes; if we look at their capabilities, our best analyses and research
and extrapolations certainly suggest they could be a serious threat to us, but
we must reserve judgement and quantify data about how our forms of
intelligence will interact with theirs'.

> You have no data about how real AIs would behave in our society except for
> fiction, which contains no more guidance now than the Bible did for 16th
> century astrophysics

Really? We know _nothing_ about AI and our best guesses are literally as good
as random tribal superstitions?

> We have no consistent models that explain our own intelligence, let alone an
> artificial one that has yet to exist.

Someone tell the psychologists and the field of AI they have learned nothing
at all which could possibly inform our attempts to understand these issues.

------
saucetenuto
Usually in these discussions we use the term "Unfriendly AI" instead of
"Hostile AI". That gets at an important distinction: these AIs don't _want_
humanity to die, it's just that they don't want humanity to _live_ and they're
presumed to have the ability to steal all our resources for themselves.
Humanity still dies, but only incidentally. The author discusses this point a
little but I think it's important enough to put it front and center, in our
terminology.

It's interesting to think of corporations as being Unfriendly in this sense.
The analogy isn't perfect, though: humans make up a corporation's computing
substrate, so they're forced to value humans more than the classic Unfriendly
AI would.

~~~
dnautics
is there a third category of AI which doesn't necessarily care whether
humanity lives or not, but winds up destroying it because of its Lennie-like
power (possibly even in the service of humanity)?

~~~
saucetenuto
Maybe? I'm not sure what you mean.

~~~
eurleif
I took it to mean
[http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)

------
madaxe_again
The point here is that the generation of wealth, which is the conversion of
labour and marginal efficiency into a number in a ledger, is not in and of
itself aligned with human interests - it turns valuable labour and
intelligence into a store of labour-time exchange, which can be used to
purchase more labour, rather than any absolute benefit. The problem arises
when the store of capital grossly exceeds available useful labour, which it
does now in the hands of those who hold it. This causes inflationary forces,
the results of which are very visible, and in this way the excess labour-time
is simply _destroyed_, as it becomes devalued.

Money is a highly inefficient way of improving the world.

~~~
AndrewKemendo
>Money is a highly inefficient way of improving the world.

True vis-à-vis itself - as money in and of itself does not have an ethical
"direction." So I would say you are throwing the bullion out with the
bathwater.

Money is a very efficient way of focusing effort - as your example indicates.
How it is focused then is up to what we collectively value and how much we
want to invest in determining where it will find value.

------
dnautics
"Corporations are at least somewhat constrained by the need to actually
provide some service that is useful to people. Exxon provides energy,
McDonald’s provides food, etc. The exception to this seems to be the financial
industry. These institutions consume vast amounts of wealth and intelligence
to essentially no human end."

Well, no. Banks provide liquidity, investment opportunities, and loans. Just
like McDonald's, there is a cost to this service, and the negatives of this
cost may or may not be worth the "positive constraint", but to say there is no
positive constraint is utterly ridiculous.

"Of all human institutions, these seem the most parasitical and dangerous."

What of those institutions that wage wars and actually get people killed?

~~~
eitland
> What of those institutions that wage wars and actually get people killed?

Keep in mind that these are the same institutions that at some point saved
the rest of the European Jews, the rest of the European gays, the rest of the
European gypsies, the rest of the European communists, the mentally or
physically disabled, etc.

~~~
bcoates
Nations: they haven't killed _everyone_

------
jerf
It's a valid point on its own, but it's not a valid counterargument against
the Singularity Institute. Human gatherings are still created from humans,
which constrains the space of what they are, what they can do, and what they
can think. Arguably they've evolved along with us, and what we have today are
still logical extensions of the political structures we've been evolving for
the last few million years.

AIs have no such constraints. However alien a financial corporation may be, it
might as well be a single human compared to what AIs could be. If the problem
is real, it's a different problem, and the problem of large human corporations
is only the smallest, smallest taste of the problem.

------
NoodleIncident
The jab at the financial industry is unnecessary, but the actual point is very
interesting.

I first heard of this as the 'Santa Claus' argument on a forum thread. Short
version: just because you can't point at a single person and say 'he is
Santa', doesn't mean that Santa doesn't exist. Despite living solely as a meme
in people's heads, he has desires (make kids happy on Christmas, make kids
behave), and can act on them (put presents under the tree).

Those who supported this position used Microsoft as an example. 'Microsoft',
like every company, is at its core an idea or a meme that everyone just
accepts exists. Parts of it get written down, of course, and various people
hold different parts of it in their head. Nevertheless, the company is its own
entity, with its own goals and means of achieving them, yet exists only in
people's heads.

~~~
pdonis
But since Santa only exists as a meme in people's heads, Santa can't do
anything without the people in whose heads the meme exists doing something.

Similarly, corporations can't do anything without the people that own them and
work for them doing something. So viewing corporations as quasi-autonomous
agents is misleading; they can be viewed as collective agents in some
respects, but in the end everything they do has to come down to some human or
humans taking some actions.

Basically, the author of this article is punting: he's saying the problem is
corporations being "effectively independent of human control", instead of
actually looking at the actual humans whose decisions and actions constitute
what the corporations do.

~~~
sutterbomb
I don't think he's punting. He's saying that actually looking at the actual
humans is misguided, because they can and will be replaced or exchanged with
no effect on the system as a whole. At the micro level, a department or even a
company may change course in a noticeable way with one person being replaced.
But industry wide, and society-wide, the "actual people" are mostly irrelevant
to the mechanics of the system.

~~~
pdonis
_they can and will be replaced or exchanged with no effect on the system as a
whole_

Which just punts on the next question: why? Who set up the corporate structure
that way? Answer: other humans.

Corporations and other large organizations don't come into existence by magic,
and they don't develop structures that make people, even at the CEO level,
into replaceable parts, by magic. Human beings have to make choices for those
things to happen. Human beings can also make choices to _change_ those
structures, if that is what needs to be done. But to do that you have to stop
ignoring the fact that it is human agency, not some amorphous "corporate"
agency, that is causing the problems in the first place.

~~~
JoeAltmaier
But corporations don't behave like human beings; they are not run by human
moral motives. You keep your job at a corporation by promoting the corporate
interest. That means profits, efficiency, etc. If you fail to make your
numbers, you'll be replaced by someone who will.

So corporations are definitely NOT run by humans behaving sensibly, but
instead by a strange machinery specific to the corporate ecosystem.

~~~
pdonis
_But corporations don't behave like human beings_

Corporations don't "behave" at all. What you really mean is that human beings
behave differently when they work for corporations:

 _You keep your job at a corporation by promoting the corporate interest._

That's true, but that by itself doesn't make the behavior immoral. If the
corporate interest is best served by making products that people actually want
or need, and selling them at a fair price, then keeping your job by promoting
the corporate interest is perfectly moral.

If, on the other hand, the corporate interest is best served by something
else, then we, as humans, need to revisit the whole issue of corporate
governance: what is it that puts corporate interests out of alignment with our
interests as human beings and members of society? And you can't just say "get
rid of corporations" because corporations are necessary: without them we
wouldn't have enough food, wouldn't have houses, wouldn't have cars, wouldn't
have computers or the Internet, etc., etc.

None of us can survive purely by our own efforts; we need to be able to
specialize and trade, and we need to engage in collective projects that
require the coordinated efforts of many people. Corporations are a necessary
part of doing that. The fact that the legal powers we give corporations have
been abused does not mean all corporations abuse them. And the abuse doesn't
happen because corporations magically do things without humans doing things;
the abuse happens because some humans use corporations as tools to prey on
other humans instead of providing value.

------
yukichan
If corporations are so evil, why has their pay fed my family, paid for my
house, and created so many amazing products we surround ourselves with? It's
ridiculous. Big organizations have problems, but to pronounce them as
inherently evil ignores all the good they do. Maybe you have a different idea
of good in mind, in which case we'll probably never agree on anything. When
people organize they can create amazing things, and that's what corporations
are, an organization of people: employees, customers, and investors. And they
do amazing things. Go to a country that doesn't do so well at creating
functioning organizations and see the difference.

~~~
Florin_Andrei
> _If corporations are so evil, why has their pay fed my family, paid for my
> house, and created so many amazing products we surround ourselves with?_

"If farmers are so evil why have they fed my family, paid for my house and
created so many amazing products we surround ourselves with" \- said the cow.

~~~
Houshalter
I think you and the other replies are missing the point. This is by far the
most prosperous time in human history. It's not like we are living in some
dystopia caused by corporations. It's certainly not anything compared to
unfriendly AI.

~~~
Florin_Andrei
> _This is by far the most prosperous time in human history. It's not like we
> are living in some dystopia_

Believe me, I don't disagree with that. Maybe I should emphasize that, even
though I live in Silicon Valley now, I grew up under a communist dictatorship
in Eastern Europe. So I think I know a thing or two, from a _practical
standpoint_, about dystopia - the indoctrination, the forced labor, the
rationing of electricity, fuel and food, the permanent fear of the secret
police. That - I have not forgotten, for how could I?

All I'm saying is - this golden age could be _even better_ , far better, if it
weren't for the constant skimming of the cream that goes on all the time at
the top.

If anything, it's the relative prosperity that makes it unlikely that the
situation will ever be fixed. People don't rise as one, in anger, unless they
are literally hungry and cold, in a physical sense. That, too, I know from
first-hand experience. I've lived the incandescent emotions of the rioting mob
while being one of them, out on the streets, 25 years ago, back home, in the
dead of winter.

But when they are more or less sated, and provided with adequate (if not
highbrow) entertainment, people will allow astonishing amounts of corruption,
bribery, and downright theft to keep going at the top of the pyramid of money.

Is my belly full now? Yes. Am I more or less free to do most of the things I
want? Yes. But do I still see corruption and injustice at the top, to the
benefit of Big Money? Yes.

I guess I'm one of those people who are strongly motivated by _principle_.
Injustice remains injustice no matter how pretty the makeup.

------
Hydraulix989
I think there are other forms of hostile AI too, like the algorithms that
determine which advertisements to show in order to pollute your mind with
intentionally injected memes.

~~~
thibauts
It's not pollution, only a market share of your thoughts and actions.

------
fennecfoxen
Naah, corporations aren't independent profit-seeking AIs. They don't bother
making money except insofar as their owners drive them to. Even in that state,
they're subject to principal/agent conflicts in which people seek to turn the
company's revenue into their own overpriced salaries / stock options.

Now, government bureaucracies may be another story, especially once they grow
lobbying arms ;)

------
guscost
A corporation or a government can appear to be autonomous in the same sense
that a heroin addict can appear to be autonomous.

Good luck convincing them to prioritize anything other than finding more
heroin.

------
inetsee
There is a link to a "book-length pdf" at
[http://singularity.org/files/CFAI.pdf](http://singularity.org/files/CFAI.pdf),
but the link shows up as a 404. I am curious about what the "book-length pdf"
discusses, but my best efforts at finding it have failed.

Anybody have an alternate link, or more information about where it might be
found?

~~~
Tenobrus
I'm pretty sure it's now
[http://intelligence.org/files/CFAI.pdf](http://intelligence.org/files/CFAI.pdf).
The organization in question changed its name and domain somewhat recently.

~~~
mtraven
Fixed in post, thanks (and insert some juvenile snark about how-are-you-gonna-
make-a-superintelligence-when-you-can't-even-keep-your-links-from-rotting?
here).

~~~
gwern
(Redirects would be the responsibility of, and reflect on, the organization
which bought the domain, not the one which sold it.)

------
webmaven
cstross covered some of this same territory a while back (except that rather
than 'unfriendly AIs', corporations are 'invading aliens'):
[http://www.antipope.org/charlie/blog-static/2010/12/invaders...](http://www.antipope.org/charlie/blog-static/2010/12/invaders-from-mars.html)

------
lifeisstillgood
Which logical fallacy / form of argument is being raised here? It's a match
for political issues (say, Scottish Independence): if we have AI /
Independence, then we can "improve the human race / create a fairer society" -
but then the argument goes: why not just do the "fairer society" thing now, as
best you can?

Perhaps it is the "What are you really waiting for?" argument (probably a
favourable shift in power, to answer my own question).

------
sharemywin
Financial institutions provide value. Not sure how you buy a house without a
mortgage. Not sure how you come back from your house burning down without
insurance.

------
tptacek
To the extent that this is true, it seems like it's been true for pretty much
the whole of human history.

------
davidgerard
> All that brain-power is being wasted on silly hypotheticals, because those
> are fun to think about, whereas trying to fix industrial capitalism so it
> doesn’t wreck the human life-support system is hard, frustrating, and almost
> certainly doomed to failure.

This is one of my big problems with LessWrong (and I've been reading it for
four years): for all their claims of relevance, a startling unwillingness to
question the social structure in a manner that would decrease the privileges
of Bay Area techies like themselves. Politics is the _mind-killer_ , don't you
know.

The offshoot Effective Altruist movement has the same problem: throw money at
the symptoms (and denigrate anyone who doesn't do the same) while noticeably
never, ever questioning the system the problems are in the context of.

------
bachback
DAC =>
[https://en.bitcoin.it/wiki/Distributed_Autonomous_Community_...](https://en.bitcoin.it/wiki/Distributed_Autonomous_Community_/_Decentralized_Application)

------
fu9ar
I'm glad that all of those intelligent idiots are stuck writing HFT algorithms
for a middling salary on Wall Street so that the rest of us who are playing a
better game can get on with life.

Money isn't everything.

------
codeulike
Brings to mind Accelerando by Charles Stross, in which the baddies are post-
singularity AI/corporation hybrids.

------
UK-AL
Finance gets capital in the right place, at the right time. That provides
value.

------
harrystone
A blog about the evil corporations, hosted by Google. I love the internet.

------
zeno334
So McDonald's is ok because it is at least somewhat restrained by the need to
offer a useful good (food) but the financial industry is parasitical and evil
and offers nothing of use? Seriously?

~~~
sp332
McDonald's isn't "ok", but it's limited in how evil it can be because it has
to give some benefit to other people. A hostile AI could be even worse,
because it doesn't have that constraint.

~~~
randomdata
The financial system is what provides farmers with the necessary capital to
grow the food McDonald's needs to be in business. That doesn't
necessarily make them okay either, but isn't allowing people to eat, without
having to worry about producing their own food, at least of some benefit as
well?

~~~
sp332
Financial institutions aren't inherently bad, but the ones we have are bad.

~~~
randomdata
Doesn't 'financial institution' cover a pretty wide gamut? There are small
institutions that exist in only a small town, to government owned entities, to
giant businesses that span the entire world. Are they all categorically bad?
What can be done to make them less bad?

