
Should AI Be Open? - apsec112
http://slatestarcodex.com/2015/12/17/should-ai-be-open/
======
poppingtonic
'If you were to come up with a sort of objective zoological IQ based on amount
of evolutionary work required to reach a certain level, complexity of brain
structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo
erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100
is the difference between “frequently eaten by lions” and “has to pass anti-
poaching laws to prevent all lions from being wiped out”.'

[EDITED: the intended quote is below. the quote above is the next paragraph of
OP, which is only slightly less relevant than the intended one]

'Why should we expect this to happen? Multiple reasons. The first is that it
happened before. It took evolution twenty million years to go from cows with
sharp horns to hominids with sharp spears; it took only a few tens of
thousands of years to go from hominids with sharp spears to moderns with
nuclear weapons. Almost all of the practically interesting differences in
intelligence occur within a tiny window that you could blink and miss.'

Yudkowsky's position paper explains this idea in more detail:
[http://intelligence.org/files/IEM.pdf](http://intelligence.org/files/IEM.pdf)

------
mindcrime
So, here's a random thought on this whole subject of "AI risk".

Bostrom, Yudkowsky, etc. posit that an "artificial super-intelligence" will be
many times smarter than humans, and will represent a threat somewhat analogous
to an atomic weapon. BUT... consider that the phrase "many times smarter than
humans" may not even mean anything. Of course we don't know one way or the
other, but it seems to me that it's possible that we're already roughly as
intelligent as it's possible to be. Or close enough that being "smarter than
human" does _not_ represent anything analogous to an atomic bomb.

So this might be an interesting topic for research, or at least for the
philosophers: "What's the limit of how 'smart' it's possible to be?" It may be
that there's no possible way to determine that (you don't know what you don't
know, and all that), but if there is, it might be enlightening.

~~~
vectorjohn
I think most people didn't really understand the meaning of your comment. They
seem to all equate intelligence and processing speed.

I think it's legitimately an interesting question. As in, it could be
something like Turing completeness: all Turing-complete languages are capable
of computing the same things, some are just faster. Maybe there's nothing
beyond our level of understanding, just a more accelerated and accurate
version of it. An AI will think on the same level as us, just faster. In that
hypothetical, an AI 100x faster than a person is not much better than 100
people. It won't forget things (that's an assumption, actually), its neuron
firing or equivalent would be faster, but maybe it won't _really_ be capable
of anything fundamentally different from what people can do.
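
As a toy sketch of the Turing-completeness point, consider two programs that
compute exactly the same function, one enormously faster; the speed differs by
orders of magnitude, but nothing new becomes computable:

```python
from functools import lru_cache

def fib_slow(n):
    # Exponential time: the unaided thinker in the analogy.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # The very same function, memoised: the "100x faster" thinker.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# Identical answers, wildly different time to reach them.
assert fib_slow(30) == fib_fast(30)
```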

This is _not_ the same as the difference between chimps and humans. We are
fundamentally on another level. A chimp, or even a million chimps, can never
accomplish what a person can. They will not discover abstract math, write a
book, speak a language.

Mind you, I suspect this is not the case. I suspect that a super intelligent
AI will be able to think of things we can never hope to accomplish.

But it is an interesting question that I think is worth thinking about, rather
than inanely downvoting the idea.

~~~
rayalez
Even if that is the case, a person with human-level intelligence, but with
unlimited memory, the ability to visualize, an internet connection, no need to
sleep, and thinking 100 times faster than a normal person, would quickly
become pretty much a god to us.

~~~
shpx
I agree; I suspect that at this time there's more room for progress to be made
by augmenting human intellect than by AI.

Think about how long it takes you to imagine a program vs. actually coding it,
or to imagine an object you want to create vs. actually building it; there are
at least two or three orders of magnitude of improvement to be had over the
keyboard.

------
pfisch
Allowing AI research to continue past its current point is probably going to
be humanity's worst decision.

I'm not even sure how we could stop it, but we should really be passing laws
right now about algorithms that operate as a black box, where a training
algorithm is used to generate the output. For some reason everyone just thinks
we should rush forward into this, unconcerned about an AI that is superhuman.

Whether it is a good or bad actor doesn't even matter. Giving up control to a
non-human entity is the worst idea humanity has ever had. We will end up in a
zoo either way.

~~~
argonaut
Its current point? Current machine learning algorithms are still incredibly
stupid. We are >>>20 years away from AI.

~~~
pfisch
We are an unknown amount of time away from a true AI.

Right now we are making the building blocks that will make up that AI. We are
very close to AI that can drive tanks and fly weaponized drones. We are very
close to AI that replaces most blue-collar jobs - really, the majority of jobs
in the world.

If we stop these lines of AI research and technology right now, we can
probably make it to the stars while still being a free people. If we make a
true AI, it doesn't even matter whether it is benevolent or not: humanity will
no longer be in control of its destiny.

~~~
argonaut
> We are very close to AI that can drive tanks and fly weaponized drones. We
> are very close to AI that replaces most blue-collar jobs - really, the
> majority of jobs in the world.

You know this because you're an expert in the field?

~~~
pfisch
[http://venturebeat.com/2015/06/03/googles-self-driving-
cars-...](http://venturebeat.com/2015/06/03/googles-self-driving-cars-have-
driven-over-1-million-miles/)

[http://www.pcworld.com/article/237005/foxonn_to_rely_more_on...](http://www.pcworld.com/article/237005/foxonn_to_rely_more_on_robots_for_manufacturing.html)

I don't have to be an expert. Anyone can see what is happening.

------
rl3
> _And yet Elon Musk is involved in this project. So are Sam Altman and Peter
> Thiel. So are a bunch of other people whom I know have read Bostrom, are
> deeply concerned about AI risk, and are pretty clued-in._

This is precisely what dumbfounded me about the announcement.

> _My biggest hope is that as usual they are smarter than I am and know
> something I don’t._

It's possible that OpenAI might be a play to attain a more accurate picture of
what constitutes state-of-the-art in the field, effectively robbing the large
tech companies of their advantage—all the while building a robust research
organization that could potentially go dark if necessary.

Admittedly, that also sounds like it could be the plot to a Marvel movie.
Perhaps a simpler explanation is that the details aren't really hashed out
yet, and they're essentially going to figure it out as they go—which would be
congruent with the gist of OpenAI's launch interview.

~~~
blazespin
They think that if they arm individuals with AI, there will be less of a
chance for an über-AI to overwhelm them. Think about the right to bear arms.

They are also probably worried about societal change and the angst everyone is
going to feel as AI starts becoming more commonplace. Where do people (beyond
entertainers and AI programmers) fit in such a world? They don't. People start
to become very irrelevant.

~~~
sawwit
> _People start to become very irrelevant._

Please be precise here and say they will be irrelevant in _economic terms_.
What will be left are things humans otherwise care about: producing art,
consuming art, fun, games, sports, traveling, companionship, partying,
building things, learning new things etc.

I'm looking forward to it, and I don't see a reason why anyone wouldn't.

------
Ono-Sendai
A couple of thoughts on this topic:

* Whether the source code to advanced AI is open may have some importance, but what determines whether some individual or corporation will be able to run advanced AI is whether they can afford the _hardware_. I can download some open-source code and run it on my laptop - but Google has data centres with 10s or 100s of thousands of computers. The big corporations are much more likely to have/control the advanced AI because they have the resources for the needed hardware.

* Soft/hard takeoff - I think a lot of people miss that any 'hard takeoff' will be limited by the amount of hardware that can be allocated to an AI. Let us imagine that we have created an AI that has reached human-level intelligence, and that it requires a data centre with 10,000 computers to run. Just because the AI has reached human-level intelligence doesn't mean it will magically get smarter and smarter and become 'unto a God' to us. If it wants to get 2x smarter, it will probably require 2x (or more) computers. The exact ratio depends on the equation of 'achieved intelligence' vs hardware requirements, and also on the unknown factor of algorithmic improvements. I think that algorithmic improvements will have diminishing returns. Even if the AI is able to improve its own algorithms by, say, 2x, it's unlikely that will allow it to transition from human-level to 'god-level' AI. I think the hardware resources allocated will still be the major factor. So an AI isn't likely to get a lot smarter in a subtle, hidden way, or in an explosive way. More likely it will be something like 'we spent another 100M dollars on our new data centre, and now the AI is 50% smarter!'
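
As a toy sketch of that 'achieved intelligence vs hardware' equation, suppose
(purely as an illustrative assumption) that intelligence grows only
logarithmically in effective compute, where effective compute is hardware
times any algorithmic speedup:

```python
import math

def intelligence(computers, algo_speedup=1.0):
    # Assumed toy curve: effective compute = hardware x algorithmic
    # speedup, and achieved intelligence grows only logarithmically in it.
    return math.log2(computers * algo_speedup)

human_level  = intelligence(10_000)        # the 10,000-machine data centre
self_improve = intelligence(10_000, 2.0)   # the AI doubles its own algorithms
more_hw      = intelligence(20_000)        # the owner doubles the hardware

print(human_level, self_improve, more_hw)  # ~13.3, ~14.3, ~14.3
```

Under that assumed curve, a 2x algorithmic self-improvement and a doubled data
centre each buy the same single increment: growth that is incremental and
visible, not explosive or hidden.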

~~~
TeMPOraL
A thought about your second thought: if the AI reaches smart-human-level
intelligence it may _get_ itself the hardware. It could hack or social-
engineer its way into the Internet, start making (or taking) money, and use it
to hire humans to do stuff for it.

~~~
Ono-Sendai
Indeed. Maybe there should be a board of humans that has the final say on
whether money should be allocated to hardware for the AI. And they wouldn't be
allowed to do Google searches while deciding :)

------
renownedmedia
"If Dr. Good finishes an AI first, we get a good AI which protects human
values. If Dr. Amoral finishes an AI first, we get an AI with no concern for
humans that will probably cut short our future."

AI advanced enough to be "good" or "evil" won't be developed instantaneously,
or by humans alone. We'll need an AI capable of improving itself. I believe
the author's argument falls apart at this point: any AI able to evolve will
undoubtedly evolve to the same point, regardless of whether it was started
with the intention of doing good or evil. Whatever ultra-powerful AI we end up
with is just an inevitability.

~~~
isolate
Why would it undoubtedly evolve to the same point?

~~~
itburnswheniit
I think he's suggesting there would be a critical mass of intelligence, if
there is such a thing. Humans might not survive the transition either way,
whether through a malevolent AI or a good one.

I guess we'll find out, eh?

------
nickpsecurity
Dabbling in and reading on AI for over a decade makes me laugh at any of these
articles positing a connection between OpenAI, AI research, and the risk of
superintelligence. Let's just say we're so far from thinking,
human-level-intelligence machines that we'll probably see superintelligence
coming long before it's a threat. And be ready with solutions.

Plus, from what I see, the problem reduces to a form of computer security
against a clever, malicious threat: you contain it, control what it gets to
learn, and only let it interact with the world through a simplified language
or interface that's easy to analyse or monitor for safety. Eliminate the
advantages of its superintelligence outside the intended domain of
application.
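
A minimal sketch of such a restricted interface, where the grammar and the
audit hook are hypothetical stand-ins:

```python
import re

# Hypothetical example grammar: two message types, tightly bounded payloads.
ALLOWED = re.compile(r"^(ANSWER|REFUSE) [A-Za-z0-9 ,.]{1,200}$")

def log_violation(raw):
    # Stand-in for a real audit trail.
    print("blocked non-conforming output (%d chars)" % len(raw))

def gatekeeper(raw_output):
    """Let the system talk to the world only through a grammar small
    enough to analyse exhaustively; drop and log everything else."""
    if ALLOWED.fullmatch(raw_output):
        return raw_output
    log_violation(raw_output)
    return "REFUSE output outside approved grammar"

print(gatekeeper("ANSWER the bridge load limit is 40 tonnes."))
print(gatekeeper("<script>exfiltrate()</script>"))  # dropped and logged
```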

That's not easy by any means, amounting to high-assurance security against a
high-end adversary. Yet it's a vastly easier problem than beating a
superintelligence in an open-ended way. Eliminate the open-ended part, apply
security engineering knowledge, and win with acceptable effort. I think people
are just making this concept way more difficult than it needs to be.

The biggest risk is some morons in stock trading plugging greedy ones into the
trading floor with no understanding of the long-term disruption potential of
the clever trades they try. We've already seen what damage the simple
algorithms can do. People are already plugging in NLP learning systems.
They'll do it with deep learning, self-aware AI, whatever. Just wait.

~~~
Ironchefpython
> The biggest risk is some morons in stock trading plugging greedy ones into
> the trading floor with no understanding of the long-term disruption
> potential of the clever trades they try.

Actually, it's not the lack of understanding, it's the lack of moral
responsibility.

We've spent the last few centuries transitioning from a society ruled by
strongmen driven by personal aggrandizement to a society where people spend
the majority of their adult life as servants to paperclip-maximization
organizations (aka corporations).[1] Much of what you see in the world today,
from the machines that look at you naked at the airport to drones dropping
bombs on the other side of the planet to kill brown people, is a result of
trying to maximize some number on a spreadsheet.

When we install real AI devices into these paperclip maximizing organizations,
you'll have the same problem as you have today with people, except that
machines will be less incompetent, less inclined to feather their own nests,
and more focused on continually rewriting their software with the express goal
of impoverishing every human on the planet to maximize a particular number on
a particular balance sheet.

[1]
[https://wiki.lesswrong.com/wiki/Paperclip_maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)

------
mortenjorck
We already have a world awash in superhuman AI; it's just that this AI is at
perhaps the same level of maturity as computers were in the 17th Century. This
AI is of course the corporation: Corporations are effectively human-powered,
superhuman AIs.[1] By crowdsourcing intelligence, they optimize for a wide
variety of goals, their superhuman decision-making running at the pace of
Pascal's mechanical calculator. Yet even the nimblest companies can only move
so fast.

This is to say that even in a hard-takeoff scenario, we would be looking at
something that is still hard-limited by its environment, even if it can
compete with a 1000-person organization's worth of intelligence. The danger
isn't that it somehow takes over the world by itself; the danger is that we
gradually connect it to the same outputs that the decision-making structures
of corporate entities are connected to, and it ultimately remakes our world
with the very tools we give it.

Open-sourcing AGI is no more inherently dangerous than open-sourcing any of
the software used to run an enterprise business. It is the choice of what we
ultimately give it responsibility for that should draw our caution.

[1] [http://omniorthogonal.blogspot.com/2013/02/hostile-ai-
youre-...](http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-
soaking-in-it.html)

~~~
sawwit
No. Corporations are not necessarily like artificial intelligences. They are
cooperations of human intelligences, and these two classes of intelligence
actually have very little in common if you look past the similarity that both
are potentially very powerful and intelligent. Corporations are driven by
material profit, but in the end there is a reasonably large possibility that
they are shaped by human values (because they are run by humans, and because
otherwise people would refuse to buy their products). The same cannot be said
about AIs with high certainty.

~~~
deepnet
The comparison is very apt - the first AIs will embody corporate values, as
corporations will build and be liable for them.

Likely AIs will be shaped by their builders: if corporations build them, they
will adhere primarily to the profit motive; if humanitarian hackers build
them, they will have human values.

Reputedly the Russian army has built guard robots; these are just guns on
tanks with a kill radius, and no values are required. Yet are they less moral
than the human-controlled drones? At least with an AI it can get stuck in a
corner or a logic loop and you may effect an escape; with humans you need a
whistleblower.

Asimov's robot books are informative: his robots are the most moral actors,
obeying their three laws, often protecting humans from other humans'
decisions.

Certain corporations dehumanise decisions, so that while the processing occurs
in wetware, the human worker is only a cog and the invisible hand of human
values is removed.

Most workers today could be trivially replaced with a near-future neural net.

Of course humans conspire, complain, unionise, strike, work to rule, demand
rights and empathise with their customers - so there is a maximum level of
evil a corporation of humans can rise to - but as history has shown, this is
an unacceptably high bar.

The corporate board can make decisions based on human values only so long as
they do not go against the rapacious pursuit of profit, or the CEO will be
deposed by the shareholders.

Once a corporation reaches transnational size, nothing can really stop it or
even get it to pay tax if it doesn't want to.

I think that much of what people actually fear about the AIpocalypse is
exactly the sort of dehumanising powerlessness and machine-like cruelty they
already experience from corporations and governments.

You may be speaking to a human who empathises, but often one suspects they are
there to sop up your moans, not to help you.

An AI is an amplifier of what we already are; in fearing robots, we rightly
fear their creators' motives.

~~~
sawwit
> _they will adhere primarily to the profit motive_

You are assuming that there will be an obvious way of doing so, an obvious
solution to the control problem.

------
beat
Should AI be open?

Depends on whether the AI is capable of deciding for itself whether it should
be open or not.

------
itburnswheniit
Maybe en masse we're about as genetically smart as our cultural bias allows us
to become? We keep modifying classic 'natural selection' through social
programs, etc. That's great as a cultural feel-good measure, and it helps our
species survive in other ways, but... what we do doesn't favor intelligence.

AIs won't have that emotional baggage.

It will be easier to first develop a way of getting around the 'human emotions
problem', then likely leapfrog us entirely at the rate a Pareto curve allows.

I can't think outside my human being-ness, so I have no idea what is going to
happen when something smarter appears on the planet, except to point out that
there once were large land animals (ancestors of the giraffe and elephant) in
North America until humans arrived.

My fear-based response screams YES MAKE IT OPEN.

However it shakes out, I think it'll be messy for human beings. We're not
exactly rational in large groups. The early revs of AI (human controlled) will
be used for war.

One has to ask: what grows out of that besides better killers?

~~~
SolaceQuantum
This implies that human emotions would be considered a problem by AIs. What
kind of neural network behavior would motivate the removal of learned
emotions? Assuming we've progressed to the point where an AI can remember the
reasons it learns something, what would be an appropriate reason to remove a
learned emotional range?

~~~
itburnswheniit
A problem only in the sense that it's an 'instability' in humans that a
sufficiently advanced AI would recognize. Instead of needing to evolve to the
point of understanding emotions, all it has to understand is how to get around
humans when they are being irrational.

I suspect emotional range may be the last thing to develop because it's not
technically needed to evolve past the point of human intelligence.

------
tunesmith
I have one basic question on friendly AI - suppose we work and work and
eventually figure out how to code in a friendly value system in a foolproof
way, given any definition of "friendly". Great. But given that ability, how do
you even define what "friendly" or "good" is?

As a layman, I so far can only see it in terms of basic philosophy and
normative ethics. By definition, a friendly AI is one that doesn't merely deal
with facts, but also with "should" statements.

Hume's Guillotine says you can't derive an ought statement from is statements
alone. Some folks like Sam Harris disagree, but they're really just making
strenuous arguments that certain moral axioms should be universally accepted.

The Münchhausen Trilemma says that when asking why (in this case, why
something should or should not be done) you've only got three choices: keep
asking why forever, resort to circular reasoning, or eventually rely on
axioms. In this case, moral axioms or value statements.

So it seems like any friendly AI is going to have to rely on moral axioms in
some sense. But how do you even define what they are? Normative ethics is
generally seen to have three branches. For consequentialism (like
utilitarianism), you make your decision based on its probable outcome, using
some utility function. For deontology, you rely on hardcoded rules. For virtue
ethics, you make decisions based on whether they align with your own
self-perception of being a good person.

But all three have flaws. In consequentialism, it's like putting on blinders
to other system effects, and the proposed actions are often deeply unsettling
(like pushing a guy off a bridge to block a trolley from killing three
others). In deontology and virtue ethics, actions and the principles they are
derived from can be deeply at odds - whether it's hypocrisy in deontology or
the 'road to hell paved with good intentions' in virtue ethics. In general,
deeply counterintuitive effects can be derived from simple principles, as
anyone familiar with systems dynamics knows.

But even beyond that, even if we had a reasonable, consistent AI controlled by
solid values, and even if the people judging the AI could _accept_ the
conclusions/actions that the AIs derive from those values, how would we ever
get consensus on what those values should be? For instance, even in our
community there's a fair amount of disagreement among these basic root-level
utility functions:

- Maximize current life (people alive today), like Bill Gates believes.
- Maximize future life (survival of the species).
- Maximize the health of the planet.

etc, etc - those utility functions lead to different "should" conclusions,
often in surprising ways.
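
A toy sketch of that divergence, scoring one made-up policy under three
root-level utility functions (every number here is invented purely for
illustration):

```python
# One hypothetical policy, described by three invented numbers.
policy = {"lives_saved_now": 1_000_000,
          "extinction_risk_delta": +0.001,   # slightly riskier long term
          "ecosystem_impact": -0.2}          # mild environmental cost

def u_current_life(p): return p["lives_saved_now"]
def u_future_life(p):  return -p["extinction_risk_delta"]
def u_planet(p):       return p["ecosystem_impact"]

for name, u in [("current life", u_current_life),
                ("future life", u_future_life),
                ("planet", u_planet)]:
    print(name, "->", "do it" if u(policy) > 0 else "don't")
# Same facts, three different "shoulds": the chosen axioms do the work.
```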

~~~
rayalez
Well, you could have a utility function "do what humans tell you"....

------
clickok
We just had a series of debates/discussions on this topic at my university,
the results of which were pretty inconclusive. There are just too many
possible scenarios which seem to require different responses, and in most
cases to provide those responses is to answer philosophical questions that
have been around for millennia.

The strategies for mitigating risk seem to be: ensure that the AIs are
controllable; avoid situations where there is a single AI (whether controlled
or uncontrolled) that is too powerful; and ensure that the AI's goals are
broadly acceptable to humankind.

The first and the third objectives are extremely difficult, not just
technically, but even from a conceptual standpoint[1]. The second strategy is
reasonable, because even if a superhuman intelligence were somehow well
controlled, depending on who controls it the outcomes could vary
significantly. So perhaps the best thing we can hope for is something similar
to society's current status quo-- lots of power concentrated in a few hands[2],
but without one single (person|corporation|government) being so dominant as to
be able to act in opposition to all others.

I am not confident that we will ever be able to produce a provably safe AI, or
that we could get even a large majority of the world's population to agree on
what a "good AI" might do without devolving into ineffectual generalities[3].
Supposing that resolving these questions is not prima facie impossible, it's
not like retarding AI development comes without cost-- just about every facet
of our lives can be improved via AI, and so in the years, decades, or
centuries between when superhuman machine intelligence is theoretically
achievable and the time when we collectively agree we can implement it safely,
how many billions will suffer or die from things that we could've solved via
AI[4]?

On the whole, OpenAI sounds like a good idea. Making research broadly
available helps avoid catastrophic "singleton" like futures, while
accelerating the progress we make in the present. In addition, if there's ever
an AI SDK with effective methods of improving how "safe" a given AI is,
most researchers would likely incorporate that into their work. It might not
be "proven safe", but if there was a means to shut down a runaway process, or
stop it from spreading to the Internet, or alert someone when it starts
constructing androids shaped like Austrian bodybuilders, that would be handy.
Responsible researchers should be doing this already, but as Scott points out
the ones we should be worried about aren't responsible researchers. Open AI
development is in harmony with safe AI development, at least in some respects.
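
As a minimal sketch of the 'shut down a runaway process' part (standard
library only; the time budget is an arbitrary stand-in for a real safety
policy):

```python
import subprocess
import time

def run_with_watchdog(cmd, max_seconds=60):
    """Start an untrusted job and hard-kill it past its time budget."""
    proc = subprocess.Popen(cmd)
    deadline = time.monotonic() + max_seconds
    while proc.poll() is None:           # still running?
        if time.monotonic() > deadline:
            proc.kill()                  # no appeal, no negotiation
            return "killed: exceeded time budget"
        time.sleep(1)
    return "exited normally with code %d" % proc.returncode

# e.g. run_with_watchdog(["python", "experiment.py"], max_seconds=300),
# where experiment.py is whatever hypothetical job you want fenced in.
```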

\------

1\. I have a significantly longer response that I scrapped because it might
ultimately be better suited as a blog post or some such.

2\. That's why it's called a power law distribution. Well, no, that's not it
at all, but it seemed like a funny, flippant thing to say.

3\. A universally beloved AI might be the equivalent of a Chinese Room where
regardless of what message you send it, it responds with a vaguely
complimentary yet motivational apothegm.

4\. Bostrom tends to counterbalance this by arguing how much of our light cone
(the "cosmic endowment") we might lose out on if we end up going extinct, due
to, e.g., superhuman machine intelligence. Certainly "all of the
configurations of spacetime reachable from this point" outweighs the suffering
of mere billions
of people by some evaluations, but I ask myself "how much do I care about
people thousands or millions of years into the future?", and also "if these
guys have such a good handle on what constitutes the 'right' utility function,
why haven't they shared it?". A more sarcastic variation of the above might be
to remark that if they're able to approximate what people want with such high
fidelity that they feel comfortable performing relativistic path integration
over possible futures, then superintelligence is already here.

\------

------
ultim8k
Yes! Everything that can push humanity forward should be open!

------
js8
Fear of superintelligence is just another in a series of technological scares,
after grey goo and cloning. There may be an explanation for why Musk and Thiel
indulge in this: they sincerely believe that the smart rule (or at least can
rule) the world.

But nothing is further from the truth. Humans are optimized to be cunning in
order to gain positions of power in human society. AI won't be optimized in
that way; therefore, it's probably going to lose for a long time. So an evil
AI will probably be like an incredibly annoying autistic psychopath child who
cannot comprehend human institutions, so its evil plans are totally obvious.

It's like with grey goo - biological systems like bacteria are heavily
optimized to survive in very uncertain conditions, and any potential grey goo
has to deal with that.

I think humanity is currently on track to blow itself up via global warming,
so superintelligence is not really a comparable threat to humanity. If
anything, the bigger threat is that we won't listen enough to a
superintelligence. In fact, I think a friendly AI will be something like Noam
Chomsky: totally rational, right most of the time, fighting for it, telling us
what should be done while disregarding our emotions. Many people find this
annoying, too (including me and many very smart people).

Finally, if the hypothesis about superintelligence is right, why would a
superintelligence want to evolve itself further? It could potentially be
beaten by the improved machine, too.

