
How Elon Musk and Y Combinator Plan to Stop Computers from Taking Over - sergeant3
https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a
======
cromwellian
The near-term danger of AI isn't in hyperintelligent SkyNet-like systems; it's
in human-controlled autonomous weapon systems and "stupid AI".

What you should fear is military drones being given the ability to make
decisions on targets or to fire, even with human assistance. And these systems
won't reside only in the hands of large governments, either.

Already, police and militaries around the world are using abstracted forms of
force, wherein targets are identified with algorithms and force is then
trained on those targets.

Which do you think is going to happen first: SkyNet, or a Predator imaging
drone falsely telling a human operator that the current image is a terrorist?

What's going to happen first? SkyNet, or self-driving cars putting millions of
people out of jobs because of a lack of demand for drivers in transportation
or in the manufacture of cars? (I'm not saying it's a bad thing, but it will
be very, very disruptive.)

If SkyNet is a threat, it's 50 or 100 years off, I think. "AI" as it is now is
nowhere near the capability people are talking about. It's sheer hyperbole.

~~~
vlehto
Luckily there are physical limitations. There are practically only two ways
to arm a drone: bombs and guns.

A small flying drone operating a 9mm pistol has extremely poor accuracy,
especially with multiple shots. If you increase the accuracy, you have to
increase the mass, which makes it a bigger target. Currently, a regular guy
with a little training and a rifle can probably shoot down a gun-wielding
drone most of the time before the opposite can happen.

[https://www.youtube.com/watch?v=xqHrTtvFFIs](https://www.youtube.com/watch?v=xqHrTtvFFIs)

Then there are drones with bombs attached to them. One of the most
sophisticated is the Hellfire missile, but it's needlessly big and expensive.
The quadcopter in the next video costs $100 and has the payload capacity to
carry a regular hand grenade.

[https://www.youtube.com/watch?v=Lb2Tpp3CIoY](https://www.youtube.com/watch?v=Lb2Tpp3CIoY)

You could go the RC car route, but again you need a bit of size to conquer
stairs. It's also easier to track where it comes from and goes.

[https://www.youtube.com/watch?v=kUXRMDK3r7s](https://www.youtube.com/watch?v=kUXRMDK3r7s)

I'm less worried about government doing bad shit and more worried about
private citizens getting nasty. Autonomous flight, GPS navigation, an address
from Google Maps, and a bit of facial recognition: the first civilian drone
murder is just a matter of time. People kill with more ease the farther away
they can be when it's done.

I'm guessing this internet privacy thing will seem like child's play when all
of a sudden everybody wants to hide their physical address. Once that is
handled, it's a bit like having rabid dogs with homing devices.

~~~
bootload
_" Luckily there are physical limitations.... I'm less worried about
government doing bad shit and more worried about private citizens getting
nasty."_

The MSF Trauma Hospital in Kunduz, Afghanistan (3rd October) didn't have this
luxury: in the _fog of war_ [0], 12 MSF medical staff and ten patients were
killed. [1][2]

Killing by remote sensing is indiscriminate. Adding AI is another order of
stupidity.

References:

[0] [http://arstechnica.com/information-technology/2015/11/how-tech-fails-led-to-air-force-strike-on-msfs-kunduz-hospital/](http://arstechnica.com/information-technology/2015/11/how-tech-fails-led-to-air-force-strike-on-msfs-kunduz-hospital/)

[1] [http://www.msf.org.uk/article/in-memoriam-msf-colleagues-killed-in-the-kunduz-trauma-centre-attack](http://www.msf.org.uk/article/in-memoriam-msf-colleagues-killed-in-the-kunduz-trauma-centre-attack)

[2] [http://www.theguardian.com/us-news/2015/oct/06/doctors-without-borders-airstrike-afghanistan-us-account-changes-again](http://www.theguardian.com/us-news/2015/oct/06/doctors-without-borders-airstrike-afghanistan-us-account-changes-again)

~~~
sailfast
Not sure that it matters for the sake of the argument, but the weapon system
in question was a human-piloted, human-manned AC-130 gunship, not a drone /
remote platform.

~~~
bootload
_" the weapon system used in question was a human-piloted, human-manned AC-130
gunship"_

True, read the Ars article: _" The BMC crew is responsible for steering the
aircraft to targets, identifying them, and shooting them; the aircraft's
battery is slaved to the sensor suite for targeting."_ - An aircrew: well
trained, professional, as ethical as you can get.

Now what happened is _" A US special operations team on the ground, given
coordinates of the Afghani NDS building by the Afghan forces they were working
with, passed them to the AC-130. But when the AC-130 crew punched the grid
coordinates into their targeting system, it aimed at an open field 300 meters
away from the actual target. Working from a rough description of the building
provided from the ground, the sensor operators found a building close to the
field that they believed was the target. Tragically, it was actually the
hospital."_

Will adding AI to this situation make any improvement when the real issue is
the _'lack of a "common operating picture"'_?

~~~
sailfast
No, you're right. The only thing that might have happened differently is a
lack of connectivity to the COP and no person in the loop to authorize,
resulting in a stand-down of the operation and no ordnance on target. Great if
it's a hospital; not so great if it's actually a bunch of enemies that you
need to remove to save your soldiers on the ground. A tough, tough thing to
get right all the time. I think a measured, hybrid approach will probably get
you the best outcome, but I'm no expert in this field.

~~~
bootload
just gets muddier and muddier ~
[https://www.washingtonpost.com/news/checkpoint/wp/2015/12/11/months-after-the-airstrike-on-msfs-hospital-in-kunduz-what-exactly-happened-is-still-unclear/](https://www.washingtonpost.com/news/checkpoint/wp/2015/12/11/months-after-the-airstrike-on-msfs-hospital-in-kunduz-what-exactly-happened-is-still-unclear/)

~~~
sailfast
Total aside, but the United States should really promote some sort of beacon /
FLIR-readable indicator for hospitals. Their existence would have to be kept
quiet so there weren't a bunch of false positives floating around, but I would
think the cost is minimal relative to the value returned.

------
chubot
I wonder what their use cases are. "Advance the state of the art in AI" is
just too nebulous. Having the smartest people in the world isn't enough... you
need to focus them on some goal.

Once you get beyond 8 researchers, you'll have problems with politics and egos
if people aren't focused on a problem. Everyone will have their pet approach
for specific problems, and they won't compose into something generally useful.
AI is really like 10 or 20 different subfields (image understanding, language
understanding, motion planning, etc.)

I think self-driving cars are a perfect example of a great problem for AI (and
something that many organizations are interested in: Google, Tesla, Apple).
Solving that problem will surely advance the state of the art in AI (and
already has).

tl;dr "OpenAI" is too nebulous.

------
api
Before you invite regulation into this area, take a long hard look at how the
government has historically approached cryptography and IT security. This is a
far simpler domain with far simpler concepts, and it's a total shitshow that
alternates between ignorant security theater and self-serving power grabs.

Get into bed with the government and they will piss in it. The most likely
outcome is costly complicated regulations that hobble legitimate development
and accomplish nothing in terms of making us safer from anything. The end
result will be like ITAR and crypto export controls: pushing development off
shore and making the USA less competitive.

I say this not as a hard-line anti-government right-winger or dogmatic
libertarian, but as someone who has a realistic view of government competence
in highly technical domains. Look at other areas and you don't see much
better. Corn ethanol, for example, is literally the _pessimum_ choice for
biofuels -- it is technically the worst possible candidate that could have been
chosen to push. The sorts of folks who ascend to political power simply lack
any expertise in these areas, and so they fall back on listening to the
agenda-driven lobbyists that swarm around them like flies. The results are
awful. Government should do government but should stay the hell away from
_specific micromanagement_ of anything technical.

~~~
astrofinch
Yes, and another issue with regulation is that it gives a speed advantage to
any team that's willing to subvert the regulations:
[http://www.brookings.edu/blogs/techtank/posts/2015/04/14-understanding-artificial-intelligence](http://www.brookings.edu/blogs/techtank/posts/2015/04/14-understanding-artificial-intelligence)

If regulations _do_ turn out to be the right path, I'd suggest that people
within the AI field form their own informal regulatory body first, fortifying
it against institutional failure modes like corruption by lobbyists etc. Then
get the government to grant them legal authority. Hopefully that would go a
ways towards addressing the issues you describe.

------
heurist
Sounds to me like Y Combinator wants to fuel their growth by creating these
tools for their companies to use. The YC business model is absolutely
brilliant, but I can't see this being some purely altruistic mission. Or if it
is, then I am jealous that they have the power and resources to put behind
such a project. I'd spend all my time trying to build an AI if I didn't have
to work! (Though I am trying to steer the company I work for in that direction
anyway...)

~~~
davnicwil
Well, yes, but in the broadest possible sense, in that a rising water level
raises all boats, including of course the YC ones, since all the work coming
out of this group will apparently be open-sourced and freely available for
anyone to use, YC company or otherwise.

From the article:

> Sam, since OpenAI will initially be in the YC office, will your startups
> have access to the OpenAI work?

> Altman: If OpenAI develops really great technology and anyone can use it for
> free, that will benefit any technology company. But no more so than that.

~~~
heurist
That's just enough for the YC social network to incorporate OpenAI
researchers, and Sam knows it. YC folks will be at the water coolers and ping
pong tables with OpenAI folks, and once the OpenAI folks move into another
office they'll still be in frequent communication with their YC social
contacts, if not through official channels then unofficially as friends. That
means they'll hear the researchers' in-progress ideas and details on how the
platform works or will work long before that information spreads to the
general public. I'd consider that a huge advantage, assuming OpenAI makes
sufficient progress.

~~~
davnicwil
Well, you're probably right, but I don't think you can realistically do much
to regulate early access to very fresh ideas through networks like that - and
I think that's fine, so long as there's no purposeful or unnecessary delay in
spreading the information more widely.

You could, for instance, make the same kind of point about YC companies'
access to investor networks, the advice of partners, all those sorts of things
which aren't explicitly reserved just for them but which of course are more
readily accessible by virtue of being in the program. It's not something
that's inherently bad; it's just how it works.

I'm not saying that having very close contact with this research group won't
be advantageous to YC companies; of course it will. But with that as a given,
the ethos of making this group's findings open and freely available for anyone
to use is well-intentioned and to be applauded, given that it's a privately
funded initiative that could just as easily have been justified in being
somewhat or completely closed and proprietary. In the big picture, is it
really important if some YC companies happen to have a slight advantage from
this, as an inevitable side effect?

~~~
heurist
> Well, you're probably right, but I don't think you can realistically do
> much to regulate early access to very fresh ideas through networks like that
> - and I think that's fine, so long as there's no purposeful or unnecessary
> delay in spreading the information more widely.

Keep the team distributed across the world and make all communication
surrounding the projects open as well. If it's for the world, it should be by
the world.

> You could, for instance, make the same kind of point about YC companies'
> access to investor networks, the advice of partners, all those sorts of
> things which aren't explicitly reserved just for them but which of course
> are more readily accessible by virtue of being in the program. It's not
> something that's inherently bad; it's just how it works.

I wouldn't argue that; that's just how business works. I would argue that the
founders are playing OpenAI up as humanitarian aid when really it
disproportionately benefits them (autonomous cars, paid for by research
grants? Investment in early adopters of the technology? Uh, yeah).

> I'm not saying that having very close contact with this research group
> won't be advantageous to YC companies; of course it will. But with that as a
> given, the ethos of making this group's findings open and freely available
> for anyone to use is well-intentioned and to be applauded, given that it's a
> privately funded initiative that could just as easily have been justified in
> being somewhat or completely closed and proprietary. In the big picture, is
> it really important if some YC companies happen to have a slight advantage
> from this, as an inevitable side effect?

YC's business is growing businesses, and they'll take any advantage they can
get. If it benefits them more than anyone else, then it's not charity or a
non-profit effort, and they shouldn't be billing it as such.

------
ddod
Whether it's the "singularity" or just software naturally improving over time
and taking on more "thinking" work, there's going to be a huge and
insurmountable unemployment problem in the near future. The market values
human thought/labor to the extent that it's cheaper or more effective than an
automated solution. When that isn't the case, you can fill in the blanks.
That, to me, is the scary part of AI.

It doesn't sound like this project has any scope to address this practical
concern, which, to me, is largely economic. I don't see how universal access
to AI puts food on the table.

~~~
henrikschroder
There are many dystopian future possibilities as automation eats jobs.

There are also a few positive ones, and I hope we can move towards them. One
way would be to shift from taxation of human labour to taxation of the means
of production. Another is if access to quality-of-life products becomes so
cheap that they require very little labour to earn.

If you extrapolate the progress of solar power, 3D printing, and synthetic
meat, you can imagine a machine that is cheap to produce, but which would make
each human completely self-sustainable. Not needing to work to put food on the
table every single goddamn day would transform our society quite a lot in a
positive way.

~~~
bad_user
I understand your angle, but relying on 3D printing and synthetic meat
(probably from petroleum derivatives, because 3D printing can't happen from
thin air) in order to feed humans... dude, to me that doesn't sound like a
life that's worth living.

~~~
henrikschroder
Thin air contains all the carbon that plants get their bulk mass from...

Extrapolate further: imagine a machine that runs on solar power and creates
whatever food you want from water, carbon dioxide, and human poop. Essentially,
short-circuit the whole raise-crops-feed-cattle-slaughter-get-meat cycle. Make
the food out of the machine perfectly nutritious as well, because why not.

There would still be things to strive for, to work for, if you want. But
baseline survival is just taken care of. Sounds like a good future to me.

~~~
bad_user
In combination with sunlight, the carbon taken from the air provides the
energy that plants need to grow; however, plants also need minerals from a
healthy soil.

When it comes to food, baseline survival is already taken care of in western
countries, and we waste about one third of the food we produce. It's not food
that's the problem, but living space and forever-rising health care costs.

But you know what the irony is? We don't know a thing about what constitutes a
nutritious, healthy diet, as the reductionist science we've been applying is
not up to this task. Even more aggravating, trying to shorten the
"raise-crops-feed-cattle-slaughter-get-meat" cycle and do it on an industrial
scale (by replacing the sun's energy with fossil fuels and doing it in
concentrated operations) is precisely the root cause of many of the problems
we find ourselves in.

Meddling with the things we ingest has given us modern-day diseases such as
cancer, diabetes, obesity, and heart disease, not to mention that we're on the
brink of going back to the dark ages due to the upcoming "antibiotics
apocalypse".

And yet here you are, hoping that some future 3D printer will synthesize meat
out of thin air, instead of fixing the real problem in our society, which is
that we consume and waste too much from processes that aren't sustainable. But
yeah, 3D printing will save us; it seemed to work for Star Trek characters at
least. Good luck with that, mate.

------
rpm33
Once again, sensationalism. I watched that interview. Sam's take on AI is
perhaps the most practical I've seen in popular media, while everyone else is
freaking out about a singularity event.

~~~
fiatmoney
"Sam's take on AI" being that all machine learning research should be
intensively monitored and controlled by the government, or has he backed away
from that position?

[http://blog.samaltman.com/machine-intelligence-part-2](http://blog.samaltman.com/machine-intelligence-part-2)

------
MBlume
This... sounds incredibly naive? They seem to think that AI risk comes from
Bad People doing AI? There's not one mention of the possibility of
well-intentioned people destroying the world by accident.

------
Artoemius
I think the best defense against the misuse of nuclear weapons is to empower
as many people as possible to have nuclear weapons. If everyone has nuclear
weapons powers, then there’s not any one person or a small set of individuals
who can have nuclear weapons superpower.

Yeah, right.

------
amai
This initiative makes no sense. If AI could really become a technology that
would endanger our civilization, then you clearly would not(!) want it to be
usable by everyone. There is a reason why it is not the right of every
American citizen to own plutonium, buy sarin, or send letters full of anthrax
around.

But if AI will only be as dangerous to society as, for example, cars, then we
don't need such an initiative. So to me the whole thing seems to be a
marketing stunt by sleep-deprived billionaires who read the wrong books.

~~~
astrofinch
Yes, I would like to see the logic of Sam and Elon's discussions that led them
to opt for this route. Let's have them post it on the internet and give
everyone an opportunity to shoot holes in it. (I hope I'd have the guts to do
this if I were a billionaire.)

------
gist
Musk is spread so, so thin. It's not enough to run a rocket company and a car
company and be the chairman of SolarCity; he needs to have his hand in even
more pots.

~~~
tachyonbeam
I remember seeing him give some interviews a few months back where he looked
quite tired and overworked. I hope he has the wisdom to take a break when he
needs one.

~~~
gist
Agreed, but I don't know that he will, since (the way it seems to me) behavior
like this is similar to an addiction in some ways. You get so much positive
feedback, as well as ups and downs, that he would have to hit a tipping point
(or life-changing event) to actually change the behavior.

------
martin_
> Altman: Our recruiting is going pretty well so far. One thing that really
> appeals to researchers is freedom and openness and the ability to share what
> they’re working on, which at any of the industrial labs you don’t have to
> the same degree.

Do current AAPL/GOOG/FB engineers dislike this so much? There's secrecy within
most for-profit entities; what makes AI so different?

~~~
rntz
> There's secrecy within most for-profit entities, what makes AI so different?

It's not about AI vs. other fields; it's about research vs. engineering.
Admittedly, that line is more blurry than most, especially in highly
technologically competitive fields like AI and graphics (and less blurry in
fields where the research-to-implementation gap is larger, like PL or
algorithmic complexity).

------
walterbell
There is a role for regulation and collective choice. Take HFT as an example.
If everyone competes for lower latency access, eventually everyone locates
their bot at the same exchange. If everyone puts a lower limit on latency
(e.g. by coiling fiber optic cable), the latency playing field is leveled and
new areas of competition emerge.

Open technology will empower the expression of many human wills, individual
and collective. Human wills are today constrained and empowered by many human-
imagined systems of thought, and we can invent new ones. Will there be an AI
which explores the possibility space of constraints on AI-using humans?

------
danieltillett
My real concern about AI is that a generalised Moore's law means we have only
a relatively short time to plan. Assuming that computers continue to double in
processing power every 18 months or so, we have only about 10 years to go from
1% of human level to human level. That really is not a long time in which to
make good decisions.
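
Here's the back-of-the-envelope arithmetic behind that 10-year figure, as a
minimal sketch in Python (the only assumption is the 18-month doubling period
stated above):

    import math

    # Going from 1% of human level to 100% is a factor of 100.
    doublings = math.log2(100)   # ~6.64 doublings needed
    years = doublings * 1.5      # one doubling every 18 months
    print(f"{doublings:.1f} doublings -> ~{years:.0f} years")
    # prints: 6.6 doublings -> ~10 years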

~~~
daveguy
A few things about AI that the singularity crowd either doesn't get or doesn't
want to get:

1) Growth rates in nature are never exponential. They are sigmoidal. Sigmoids
look very exponential when you're in the middle of them, but we are starting
to see the leveling-off of Moore's law (yeah yeah, it's technology, not ICs;
still sigmoidal). See the sketch after this list.

2) Even if we had a computer that was 1,000 times faster than the ones we have
today and used 1/1,000 of the power, we STILL wouldn't have the algorithms to
produce a human intelligence, and that is one hell of an algorithm.

3) The focus for a long time has been Moore's law and the associated increase
in FLOPS. I think what is more important and more limiting is the memory
bandwidth, which is a couple of orders of magnitude lower than the FLOPS if
you count one byte per FLOP, and an extra order of magnitude lower if you
count one word per FLOP.
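
To illustrate point 1, here is a minimal sketch (the growth rate and carrying
capacity below are made-up constants, not a model of any real trend): a
logistic curve tracks an exponential closely at first and only later flattens
toward its ceiling.

    import math

    K = 100.0   # carrying capacity (arbitrary units, made up)
    r = 0.5     # growth rate per time step (made up)

    def exponential(t, x0=1.0):
        return x0 * math.exp(r * t)

    def logistic(t, x0=1.0):
        # Solution of the logistic ODE with carrying capacity K.
        return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

    for t in range(0, 13, 2):
        print(f"t={t:2d}  exp={exponential(t):8.1f}  sigmoid={logistic(t):6.1f}")

Early on the two columns are nearly indistinguishable; the sigmoid then
flattens toward K while the exponential keeps climbing.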

~~~
lolyololol
It isn't too hard to be off by an order or two of magnitude when dealing with
exponential increases; there's no reason advances couldn't level off at 1% or
at 1,000% of human capacity.

~~~
danieltillett
I am not sure that just hoping computation growth will level off at some
threshold below human level is wise planning.It is like assuming that a
singularity level intelligence will be nice and so saying we don’t need to
think about the consequences.

------
bkbaba
Musk: I think the best defense against the misuse of AI is to empower as many
people as possible to have AI. If everyone has AI powers, then there’s not any
one person or a small set of individuals who can have AI superpower.

???!!!

Isn't this like gun control all over again?! You give more guns to people so
that they can be safe, and instead people end up killing each other.

~~~
dillchen
Why are guns the only technological comparison? Replace guns in your statement
with mobile phones or access to the internet.

------
humanfromearth
It might be an incredibly bad idea to have multiple AGIs everywhere in the
world, but that's the least bad option that I can see, too.

Also, this is amazing; making a serious effort towards AGI is what we need.
We'll play with RNN configurations for a long time, but I think it's a good
call to fund people who think about the broad picture.

~~~
joe_the_user
I think that depends on how you want to model the danger of AGIs. Nukes are
dangerous but the least dangerous version of nukes doesn't seem like nukes
"everywhere in the world".

Of course, as a pure hypothetical, it's virtually impossible to come up with a
definite danger-model for AGIs.

------
arbre
This is great news. I work at a big company with advanced machine learning
tools and infrastructure. Every time I use them, I am amazed by the tools but
kind of sad for the students and researchers who have to deal with
simpler/less powerful tools. This gives me hope that the best tools will
eventually be open source.

------
cromwellian
Remember when _Engines of Creation_ and _Nanosystems_ were published, and
there was a great fear that uncontrolled nanotech development would result in
a grey goo that would consume us all?

With stuff like CRISPR, perhaps Elon should invest to stop the zombie
apocalypse. :)

------
axplusb
As awesome as this looks, I'm totally missing the point.

If they truly believe AI is dangerous, how is promoting / accelerating it
supposed to help?

Or is it a way to commoditize R&D in machine learning so that it will never be
a bottleneck for startups?

------
tinalumfoil
It's incredible the types of doomsday scenarios the wealthy invest in
stopping. The problem Elon Musk and Y Combinator are going to solve with their
money, what they will be remembered for fixing after their companies have long
since crashed and gone bankrupt, is better technology. Essentially, technology
will become so good at doing humans' work that we will run out of problems for
people to solve and drift into a lazy, non-working state incompatible with
current economies. I predict Earth will be destroyed by a passing meteor
before that happens.

Maybe if I was a billionaire I'd understand.

------
astrofinch
"Security through secrecy on technology has just not worked very often."

Nuclear weapons come to mind. Would we prefer that the knowledge of how to
make them be more widespread?

~~~
Rapzid
That knowledge is very widespread.

------
thallukrish
One example of evil AI is the fact that the power of all the data out there
about people and things, plus the ability to learn from it, allows connections
to be made like never before. With the right algorithms, it should be possible
even to make policy shifts that impact human lives in a manner favourable to
the 'superpower' government or company that owns this.

------
ThomPete
Humans are general-purpose animals, and technology is general-purpose
humankind.

If we believe that DNA is a kind of information and our genes are "looking
for" better vessels to survive through, then it's only natural to also see
technology as a much better carrier of that information than us.

The problem many have with coming to grips with the idea that AI could be a
threat is that they look at where technology is right now and then try to
imagine a computer being anywhere near our capabilities.

But this is because many think of it as a thing. As in, "Now we have finally
built a strong AI thingiemagick." However, just as human consciousness and
intelligence aren't a single thing, neither will AI be. It's going to be a lot
of things. Some are better developed than others, but most are moving at
impressive speed, and at some point enough of them are going to be put
together to create some sort of pattern-recognizing feedback loop with enough
memory and enough smart sub-algorithms to become what we would consider
sentient. </tinfoil hat>

------
hollerith
The OP seems to assume that the big danger of AI is that it will leave people
at the mercy of a (human) elite that controls an AI, or that has programmed an
autonomous AI (an AI not controlled by any humans) to care mostly or only
about the elite.

In contrast, what organizations like the Machine Intelligence Research
Institute and the Future of Humanity Institute (MIRI and FHI) consider the
main danger (and have considered the main danger for over 11 years) is that
the AI will not care about any person at all.

For the AI to do an adequate job of protecting human welfare it needs to
understand human morality, human values and human preferences -- and to be
programmed correctly to care about those things. Designing an AI that can do
that is probably significantly more difficult than designing an AI that is so
intelligent that the human race cannot stop it or shut it down (although
everyone grants that designing an AI that cannot be stopped or shut down by,
e.g., the US military is in itself a difficult task).

The big danger in other words seems to come not from a research group using AI
research to try to take over the world or to gain a persistent advantage over
other people, but rather from a research group that means well or at least has
no intention to be reckless or to destroy the human race, but ends up doing so
by having an insufficient appreciation of the technical and scientific
challenges around protecting human welfare, then building an AI that is so
smart that it cannot be stopped by humans (including the humans in the other
AI research groups).

I fail to see how changing the AI-research landscape so that more of the
results of AI research will be published helps against that danger. If one
team has 100% of the knowledge and other resources that it needs to build a
smarter-than-human AI (and has the will to build it) and all the other teams
have 99.9% of the necessary knowledge, there might not be enough time to stop
the first team or (more critically IMHO) to stop the AI created by the first
team. In particular, if the first AI is able to build (e.g., write the source
code for) its own successor -- a process that has been called recursive self-
improvement -- it might rapidly become smart enough to stop any other smarter-
than-human AI from being built (e.g., by killing all the humans).

Rather than funding a non-profit that will give away its research output to
all research groups, a better strategy is to give the funds to MIRI, who for
over 11 years have been exhibiting in their writings a vivid appreciation of
the difficulty of creating smarter-than-human AI that will actually care about
the humans rather than simply killing them because they might interfere with
the AI's goal or because the habitat and the resources of the humans can be
repurposed by the AI.

Any effective AI -- or any AI at all really -- will have some goal (or some
set or system of goals, which for brevity I will refer to as "the goal") which
may or may not be the goal that the builders of the AI _tried_ to give it. In
other words, everything worthy of the name "mind", "intelligence" or
"intelligent agent" has some goal -- by definition. If the AI is powerful
enough -- in other words, if the AI is efficient enough at optimizing the
world to conform to the AI's goal -- then all humans will die -- at least for
the vast majority of possible goals one could put into a sufficiently powerful
optimizing process (i.e., into a sufficiently powerful AI). Only a very few,
relatively complicated goals do not have the unfortunate property that all the
humans die if the goal is pursued efficiently enough -- and learning how to
define such goals and to ensure that they are integrated correctly into the AI
is probably the most difficult part of getting smarter-than-human AI right.

That used to be called the Friendliness problem and is currently usually
called the AI goal alignment problem. The best strategy on publication is
probably to publish freely any knowledge about the AI goal alignment problem,
while keeping unpublished most other knowledge useful for creating a
smarter-than-human AI.

I will patiently reply to all emails on this topic. (Address in my profile.) I
do not get a salary from FHI or MIRI and donating to FHI or MIRI does not
benefit me in any way except by decreasing the probability that my descendants
will be killed by an AI.

------
melling
While we're at it maybe we should address the possibility of overpopulation on
Mars?

Andrew Ng thinks people are wasting their time with evil AI:

[https://youtu.be/qP9TOX8T-kI?t=1h2m45s](https://youtu.be/qP9TOX8T-kI?t=1h2m45s)

~~~
DanBC
> While we're at it maybe we should address the possibility of overpopulation
> on Mars?

We've already fucked this planet so I sincerely hope a few people are thinking
of ways to avoid fucking another one.

~~~
function_seven
I can't imagine we could make Mars _more_ inhospitable than it already is.
And whatever technologies we'd have to develop to live on that planet would
forever be in our toolkit to reverse the damage we've done here and prevent
future damage there.

------
mocookie
This is starting to sound like the OSAF that tried to build a cross-platform
open-source email/calendar/notes application for the betterment of the world -
in competition with Microsoft and whatever other large corporations were doing
PIMs at the time.

[https://en.wikipedia.org/wiki/Open_Source_Applications_Foundation](https://en.wikipedia.org/wiki/Open_Source_Applications_Foundation)

~~~
mocookie
Admittedly, this parallel is based mostly on reading the biography of the
project, _Dreaming in Code_.

------
lomnakkus
(Sorry, this is a bit rambling. Hopefully it'll still be interesting to some
of you. Have had a few pints at this point...)

EDIT: Actually, this is nearing "crazy" levels. Just ignore unless you really
enjoy stream-of-consciousness. Sorry about this, HN! :)

I know I'm _really_ late to the party here, but there's a premise in this
whole discussion that I'm not sure I understand.

Why _should_ we prevent AI from taking over? I mean, I "get it"... it wouldn't
be _HI_ and that _feels_ kind of weird, but what's objectively special about
HI? Why are we treating "HI==good" as axiomatic? I mean even us tribal,
overly-emotional (&c) humans value _DI_ (Dog Intelligence) even if we're
pretty sure that it can't contemplate the fact that we're all made of the
remnants of supernovae. There's no evidence as of yet that a greater
intelligence a) exists[1], even in principle, or b) would be any less
benevolent towards us. Perhaps they would even create nice little simulations
for us to exist in. Though, I wonder what the purpose of _my_ simulation is,
given current circumstances :).

Yes, a transition from HI->AI would inevitably lead to a lot of human death
(unless we're talking _really_ out-there take-over plans involving disease and
such), but would AI really be _worse_? And for _whom_ and _why_? Humans
themselves have caused a lot of death and we seem to value ourselves pretty
highly overall (and undeservedly, IMO).

It might be that HI is the "end of the road" just like the Turing Machine
appears to be the end of the road in terms of what you _can_ compute... but
not in terms of _how fast_ you can compute it. Would "faster" automatically
mean "better" (see footnote)? I dunno.

[1] The existence of a "higher" ("faster" is probable) intelligence is an
interesting question. How would you judge such a thing? Is there more
"power"[2] to be gained through something other than being able to reflect on
yourself? AFAIUI, self-reflection is one of _the_ distinguishing features of
intelligence, but given that we're "better" than chimps -- who have an idea of
"self", thus self-reflection -- it may not be the decider. And even so, such
reflection is still subject to physics and thus without "free will".

[2] Not just faster, but "better", in some non-linear sense.

------
keithwhor
So, capitalism is all about exploiting unfair advantages, right? First-mover
advantage on AI developments (regardless of whether they're eventually made
publicly accessible or not) seems like a pretty big unfair advantage.

Good for them. I expect some great work to come out of this. :) I'm most
excited to automate travel as quickly as possible -- too many people die each
year in automobile accidents.

------
Teodolfo
It is really sad that the people funding these excellent researchers have no
fucking clue about AI (they know plenty about other things and have done
plenty of good work, but the level of nonsense here is striking). Thank God
they are giving the money to competent people instead of doing what they think
they are doing with it.

------
oneJob
Here is my prediction, after watching "Terminator Genisys". The dangerous AI
is not the AI humans "invent". It is the AI that runs away. But what AI will
humans allow to "run away"? What scenario presents an opportunity for one
group of humans to attack an AI and then provides the opportunity for said AI
to be released from its leash ("run away") and become that which we all fear?

So, you have 'red team' and 'blue team'. Blue team is super rich and builds
itself an awesome AI. Red team needs some "rally round the flag" pick-me-up
and so, looking around for targets, decides that attacking a bunch of machines
is a safe bet. If they win, awesome. If not, then they didn't kill any people,
just made a bunch of junk.

Blue team's response is to internalize the threat (as is only natural, or is
at least politically expedient to some subset of blue team) and frame the
situation as follows: "This is what we built our AI for. This is an
existential threat. It has the capacity. We only need to let it off the leash.
The choices are 'destroy' or 'be destroyed'. This is nothing less than an
existential moment for our civilization."

And with that horrible, non-technical, propaganda-riddled rationalization, the
AI developed by the most well-meaning of people will be let off the leash,
will run away, and nothing that we know about the AI up to that point will be
worth diddly squat.

I respect anyone who tries to tackle this issue. But the nature of the issue,
the kernel of the problem, is nothing less than Pandora's box. We won't know
when it is opened. But the AI will.

------
Animats
The near-term danger of AI, as I point out occasionally, is a Goldman Sachs
run by an AI. Machine learning systems are already making many investment
decisions. We're not far from the day when society's capital allocation is
controlled by programs optimizing for return over a few years, and nothing
else.

------
xiaoma
Is the Steven Levy who wrote this the same Steven Levy who wrote _Hackers_?

~~~
michaelwww
Even Steven: [https://twitter.com/StevenLevy](https://twitter.com/StevenLevy)

------
kordless
> unconstrained by a need to generate financial return

AI should definitely be constrained by financial means. Computing, unbounded
by financial constraints, will eat everything.

------
coldtea
> _How Elon Musk and Y Combinator Plan to Stop Computers from Taking Over_

Well, for Y Combinator it's easy: by ensuring funding goes to "Uber for X" and
"Facebook for Y" startups instead of real technology-advancing businesses.
/s

------
codeulike
So all this will end with the red open-source AI Jaeger mech battling against
the grey corporate AI Jaegers among the ruins of our cities. Thanks, Elon.

------
zobzu
I want computers to take over.

------
perseusprime1
Fix my autocorrect

------
phlandis
At the same time couldn't this just make it easier for rogues to fork?

------
theklub
Computers already took over.

------
gist
"OpenAI is a research lab meant to counteract large corporations who may gain
too much power by owning super-intelligence systems devoted to profits"

As opposed to (almost) the entire startup ecosystem, which is focused on ...
profits.

Edit: And what does "too much power" even mean, other than trying to use
hyperbole to make some kind of point?

~~~
daveguy
You missed the part where they say OpenAI is incorporated as a non-profit.

