
Introducing OpenAI - sama
https://openai.com/blog/introducing-openai/
======
vonnik
> _Musk: I think the best defense against the misuse of AI is to empower as
> many people as possible to have AI. If everyone has AI powers, then there’s
> not any one person or a small set of individuals who can have AI
> superpower._

In a sense, we have no other defense. AI is just math and code, and I know of
no way to distinguish good linear algebra from evil linear algebra.

The barriers to putting that math and code together for AI, at least
physically, are only slightly higher than writing "Hello World." Certainly
much lower than for other possible existential threats, like nuclear weapons. Two
people in a basement might make significant advances in AI research. So from
the start, AI appears to be impossible to regulate. If an AGI is possible,
then it is inevitable.

I happen to support the widespread use of AI, and see many potential benefits.
(Disclosure: I'm part of an AI startup:
[http://www.skymind.io](http://www.skymind.io)) Thinking about AI is the
cocaine of technologists; i.e. it makes them needlessly paranoid.

But even if I adopt Elon's caution toward the technology, I'm not sure I agree
with his reasoning.

If he believes in the potential harm of AI, then supporting its widespread use
doesn't seem logical. If you take the quote above, and substitute the word
"guns" for "AI", you basically have the NRA, and the NRA is not making the
world a safer place.

~~~
athenot
> If he believes in the potential harm of AI, then supporting its widespread
> use doesn't seem logical. If you take the quote above, and substitute the
> word "guns" for "AI", you basically have the NRA, and the NRA is not making
> the world a safer place.

Guns are not exactly good at healing, making or creating.

A better comparison would be knives. Knives can be used for stabbing and
killing but also for sustenance (cooking), for healing (surgery), for arts
(sculpture). So perhaps this is akin to a National Cutlery Association (not sure
if such an entity exists but you get the idea).

~~~
nicolashahn
You're right, guns are pure evil. Clearly, we should take them out of the
hands of cops, bodyguards, hunters, and civilians defending themselves.

~~~
gmac
Ironically, you're absolutely right. Cops, bodyguards and civilians defending
themselves generally only need guns because their adversaries have guns. Just
take them out of everyone's hands. I know this works, if you can make it
happen, because I've seen the gun death statistics for countries with
effective gun control.

Hunters are a different case, but their weapons are rather different too. To
be honest, I wouldn't much care about depriving them of a pastime if it meant
turning US gun death figures into European ones. But that's probably
unnecessary.

~~~
Lawtonfogle
>Cops, bodyguards and civilians defending themselves generally only need guns
because their adversaries have guns.

Not at all the case. Guns allow the physically weak to still have a chance to
defend themselves. On NPR I remember hearing a story about a woman calling for
the police to help as her ex was breaking into her home. They didn't have
anyone anywhere nearby, and the woman had no weapons on her. The ex ended up
breaking in and attacking her quite badly. He didn't need a weapon, and a
weapon wouldn't have made what he did any worse, but one might have given her
the chance to defend herself or scare him off.

~~~
Lawtonfogle
To clarify, I meant on NPR I remember hearing a story about a woman calling
for the police. Not sure how I forgot to add in about 4 words there.

------
karmacondon
This is a key takeaway: "...we are going to ask YC companies to make whatever
data they are comfortable making available to OpenAI. And Elon is also going
to figure out what data Tesla and Space X can share."

Money is great, openness is great, big name researchers are also a huge plus.
But data data data, that could turn out to be very valuable. I don't know if
Sam meant that YC companies would be encouraged to contribute data openly, as
in making potentially valuable business assets available to the public, or
that the data would be available to the OpenAI Fellows (or whatever they're
called). Either way, it could be a huge gain for research and development.

I know that I don't get a wish list here, but if I did it would be nice to see
OpenAI encourage the following from its researchers:

1) All publications should include code _and_ data whenever possible. Things
like gitxiv are helping, but this is far from being an AI community standard

2) Encourage people to try to surpass benchmarks established by their
published research, when possible. Many modern ML papers play with results and
parameters until they can show that their new method outperforms every other
method. It would be great to see an institution say "Here's the best our
method can do on dataset X, can you beat it and how?"

3) Sponsor competitions frequently. The Netflix Prize was a huge learning
experience for a lot of people, and continues to be a valuable educational
resource. We need more of that

4) Try to encourage a diversity of backgrounds. If they choose to sponsor
competitions, it would be cool if they let winners or those who performed well
join OpenAI as researchers at least for a while, even if they don't have PhDs
in computer science

The "evil" AI and safety stuff is just science fiction, but whatever.
Hopefully they will be able to use their resources and position to move the
state of AI forward

~~~
mikepalmer
'The "evil" AI and safety stuff is just science fiction, but whatever.'

umm... you can offer proof that we have nothing to worry about?

Does the proof go like: Just as all people are inherently good, therefore all
AIs will be inherently good?

Or is it more like: since we can now safely contain all evil people, therefore
we will be able to safely contain evil AIs?

Sounds to me like there is some risk, no?

~~~
karmacondon
As I've said many times on HN over the years, there is currently no clear path
to science-fiction-like "AI". To turn your question around, hopefully without
being rude: is there any proof that AI capable of having a moral disposition
will ever exist?

Andrew Ng (I believe) compared worrying about evil AI to worrying about
overpopulation on Mars. Which is to say, the problem is so far off that it's
rather silly to be considering it now. I would take it a step further and say
that worrying about the implications of AGI is like thinking about Earth being
overpopulated by space aliens. First we have to establish that such a thing is
even possible, for which there is currently no concrete proof. Then we should
start to think about how to deal with it.

Considering how hypothetical technology will impact mankind is literally the
definition of science fiction. It makes for interesting reading, but it's far
from a call to action.

~~~
skndr
Improvements in AI aren't linear, though. Once AGI is reached, artificial
superintelligence might follow in the span of minutes or days. I imagine the
idea here is to guide progress so that on the day AGI becomes possible, we've
already thoroughly considered what happens after that point.

Secondly - it's not necessarily about 'evil' AI. It's about AI indifferent to
human life. Have a look at this article, it provides a better intuition for
how slippery AI could be: [https://medium.com/@LyleCantor/russell-bostrom-and-
the-risk-...](https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-of-
ai-45f69c9ee204)

~~~
argonaut
> Improvements in AI aren't linear, though

This is a point everyone makes, but it hasn't been proven anywhere. Progress
in AI as a field has always been a cycle of hype and cool-down.

Edit (reply to below). Talk about self-bootstrapping AIs, etc. is just
speculation.

~~~
skndr
Sure, though you can't extrapolate future technological improvements from past
performance (that's what makes investing in tech difficult).

Just as one discovery enables many, human-level AI that can do its own AI
research could superlinearly bootstrap its intelligence. AI safety addresses
the risk of bootstrapped superintelligence indifferent to humans.

~~~
eli_gottlieb
>Just as one discovery enables many, human-level AI that can do its own AI
research could superlinearly bootstrap its intelligence.

Of course, that assumes the return-on-investment curve for "bootstrapping its
own intelligence" is linear or superlinear. If it's logarithmic, or if
something other than "intelligence" (which is a word loaded with magical
thinking if there ever was one!) is the limiting factor on reasoning, no go.
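
A toy iteration makes the difference concrete (the curve shapes and constants
below are invented purely for illustration; nothing here models real AI
progress):

```python
import math

def trajectory(gain, x0=1.0, steps=30):
    """Iterate x <- x + gain(x): a crude stand-in for a system
    reinvesting its capability into further self-improvement."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + gain(xs[-1]))
    return xs

# Superlinear returns: each unit of capability buys back more than a unit.
explosive = trajectory(lambda x: 0.1 * x ** 1.5)

# Diminishing (log-like) returns: extra capability buys less and less.
plateau = trajectory(lambda x: math.log1p(x) / x)

print(f"superlinear returns after 30 steps: {explosive[-1]:.3g}")
print(f"diminishing returns after 30 steps: {plateau[-1]:.3g}")
```

Under the first curve the iteration runs away within a few dozen steps; under
the second it crawls. "Recursive self-improvement" by itself tells you nothing
until you know which regime you're in.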

------
rl3
> _Musk: I think the best defense against the misuse of AI is to empower as
> many people as possible to have AI. If everyone has AI powers, then there’s
> not any one person or a small set of individuals who can have AI
> superpower._

This is essentially Ray Kurzweil's argument. Surprising to see both Musk and
Altman buy into it.

If the underlying algorithms used to construct AGI turn out to be easily
scalable, then the realization of a dominant superintelligent agent is simply
a matter of who arrives first with sufficient resources. In Bostrom's
_Superintelligence_ , a multipolar scenario was discussed, but treated as
unlikely due to the way first-arrival and scaling dynamics work.

In other words, augmenting everyone's capability or intelligence doesn't
necessarily preclude the creation of a dominant superintelligent agent. On the
contrary, if there are any bad or insufficiently careful actors attempting to
construct a superintelligence, it's safe to assume they'll be taking advantage
of the same AI augments everyone else has, thus rendering the dynamic not much
different from today (i.e. a somewhat equal—if not more equal—playing field).

I would argue that in the context of AGI, an equal playing field is actually
undesirable. For example, if we were discussing nuclear weapons, I don't think
anyone would be arguing that open-source schematics are a great idea. Musk
himself has previously stated that [AGI] is "potentially more dangerous than
nukes"—and I tend to agree—it's just that we do not know the resource or
material requirements yet. Fortunately with nuclear weapons, they at least
require highly enriched materials, which render them mostly out of reach to
anyone but nation states.

To be clear, I think the concept of opening up normal AI research is
fantastic, it's just that it falls apart when viewed in context of AGI safety.

------
rjvir
> Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web
> Services (AWS), Infosys, and YC Research are donating to support OpenAI. In
> total, these funders have committed $1 billion

Funny how they just slipped that in at the end

~~~
ipsum2
Note that this is "committed $1 billion", not funded. "although we expect to
only spend a tiny fraction of this in the next few years."

~~~
chimeracoder
> Note that this is "committed $1 billion", not funded.

That same caveat could apply to any fund raised by a venture fund - usually
funds are committed, and the actual capital call comes later (when the funds
are ready to be spent).

It's an important caveat in some circumstances (e.g. it hinges on the
liquidity of the funders, which may be relevant in an economic downturn), but
in this one, I'm not sure it really makes a difference for this announcement.

------
_sentient
$1B in committed funding. Just, wow.

Side note: I wonder if the Strong AI argument can benefit from something akin
to Pascal's Wager, in that the upside of being right is ~infinite with only a
finite downside in the opposing case.
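
Spelled out as an expected-value calculation (a sketch; treating the downside
as finite is the assumption doing all the work here):

$$
E[\text{pursue}] = p \cdot (+\infty) + (1 - p) \cdot D = +\infty
\quad \text{for any } p > 0 \text{ and finite } D.
$$

The usual objection to Pascal-style wagers applies: if the downside term is
also allowed to be unbounded, as the replies below argue, the arithmetic no
longer picks a side.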

~~~
logical42
Finite downside? What about Skynet?

~~~
webmaven
Technically, even the extinction of humanity is a finite downside.

You would have to posit a sort of hell simulation into which all human
consciousnesses are downloaded to be maintained in torment until the
heat-death of the universe for it to be an equivalent downside.

------
hacker_9
This is about 100 years too early. Seriously, why do people think neural
networks are the answer to AI? They have proven to be stupid outside of their
training data. We have such a long way to go. This fear-mongering is
pointless.

~~~
ceejayoz
The linked site says nothing about neural networks.

~~~
hacker_9
_" we've also started to see what it might be like for computers to be
[creative], to [dream], and to [experience the world]."_

All three of those links are about neural networks.

~~~
ceejayoz
So you've turned "what it might be like" with a couple examples of mild AI-ish
tasks that caught the public's attention this year into "the answer"?

------
samstave
This is a serious question:

Should there be an update/amendment/qualification to the laws of robotics
regarding using AI for something like ubiquitous mass surveillance?

Clearly the amount of human activity online/electronically will only ever
increase. At what point are we going to address how AI may and may not be used
in this regard?

What about when, say, OpenAI accomplishes some great feat of AI -- and this
feat falls into the wrong hands, "robotistan" or some such future 'evil'
empire that uses AI, 1984-style, to track and control all citizenry? Shouldn't
we add a law of robotics that the AI should _at least_ be required to be
self-aware enough to know that it is the tool of oppression?

Shouldn't the term "injure" be very, very well defined, such that an AI can
hold true to law #1?

Who is the thought leader in this regard? Anyone?

EDIT: Well, Gee -- Looks like the above is one of the Open Goals of OpenAI:

[https://medium.com/backchannel/how-elon-musk-and-y-
combinato...](https://medium.com/backchannel/how-elon-musk-and-y-combinator-
plan-to-stop-computers-from-taking-over-17e0e27dd02a#.yllxt7nqd)

------
vox_mollis
Where does this leave MIRI?

Is Eliezer going to close up shop, collaborate with OpenAI, or compete?

~~~
robbensinger
MIRI employee here!

We're on good terms with the people at OpenAI, and we're very excited to see
new AI teams cropping up with an explicit interest in making AI's long-term
impact a positive one. Nate Soares is in contact with Greg Brockman and Sam
Altman, and our teams are planning to spend time talking over the coming
months.

It's too early to say what sort of relationship we'll develop, but I expect
some collaborations. We're hopeful that the addition of OpenAI to this space
will result in promising new AI alignment research in addition to AI
capabilities research.

------
peter303
Not the first. Back in the 1980s, when expert systems were thought to be the
way to AI, there was Cyc (now OpenCyc). It's still around.

~~~
CurtMonash
And just how many microLenats of bogosity does OpenCyc have?

------
baconner
"We believe AI should be an extension of individual human wills..."

I realize that today machine learning really is purely a tool, but the idea
that AI will and should always be that doesn't sit quite right with me. ML
tech absent consciousness remains a tool, and an incredibly useful one, but in
the long term you have to ask the question: at what point does an AI
transition from a tool to a slave? That seems some time off still, but I do
wish we'd give it more serious thought before it arrives.

~~~
JoshTriplett
The idea is not that we should build and (try to) suppress a sentient AI; that
would be a bad idea for numerous reasons. However, we don't necessarily need
to build a sentient AI in the first place; we can build a process that has
reasoning capabilities far above human _without_ actually having agency of its
own.

~~~
baconner
See I think that's exactly where it becomes complicated. Can an entity with
reasoning capabilities far beyond that of humans have its agency suppressed
successfully? And is it ethical to do so or is that internally designed
suppression somehow ethically different from the external suppression applied
against human slaves?

If you could engineer a human being with his/her agency removed, so that you
could use their reasoning skill without all that pesky self-will, would that
be ethical?

~~~
JoshTriplett
The way you're asking the question implies that reasoning inherently has
agency/sentience that needs suppressing. It doesn't need to; there's nothing
to "suppress".

~~~
baconner
We don't know that one way or another since such a machine doesn't yet exist.
I'm suggesting that perhaps high level reasoning and sentience go hand in hand
although I can't say that with any certainty.

------
nazgulnarsil
So the idea with differential safety development is that we want to speed up
safe AI timelines as much as possible while slowing down unsafe AI timelines
as much as possible. I worry that this development isn't great when viewed
through this lens. Let's say that DARPA, CAS, and whatever the Russian
equivalent is all work on closed-source AIs. The idea here might be that open
source beats closed source by getting cross-pollination and better
coordination between efforts. The issue is that the government agencies get to
crib whatever they want from the open-source stuff to bolster their own
closed-source stuff.

------
sethbannon
I can't think of another field of research that simultaneously brings the
potential to solve all the world's problems and the potential to end life as
we know it. Very appreciative to see so many great minds working on ensuring
AI ushers in more of the former, and none of the latter.

~~~
rfrank
nuclear research.

~~~
sethbannon
How does nuclear research have the capability of solving _all_ the world's
problems?

~~~
rfrank
It doesn't, just like AI doesn't.

------
runevault
So I assume this is one of the projects Sama was talking about in his research
initiatives. Sounds promising.

------
ajtulloch
Congrats! It's a brilliant team, looking forward to great things.

------
colordrops
This reminds me a bit of all the hype around space elevators several years
ago. People were talking about it like it was an inevitable achievement in the
near future, nearly oblivious to the huge challenges and unsolved problems
standing in the way of making it happen.

I haven't seen anything but very rudimentary single-domain problems solved
that point to incremental improvement, so I'm wondering if these billionaire
investors are privy to demos the rest of us are not, and thus have real reason
to be so cautious.

~~~
tim333
AI has been progressing in a fairly predictable way as computers get faster,
gradually ticking off milestones like beating us at chess and driving cars
with fewer crashes. There are only a finite number of such skill areas to
tick off.

~~~
argonaut
This is a weird reading of history. AI progress has been anything but
predictable or steady.

~~~
scottlocklin
I'm not sure it has been anything you'd define as "progress" either. "AI
progress" is a lot like progress in controlled nuclear fusion as an energy
source. Aka, there is no such thing, really, though people work on it.

------
cdnsteve
AI is a pretty huge field. What area are they going to focus on specifically?

~~~
vox_mollis
Given Musk's public comments about existential threats, I assume the focus
will be on friendly AI theory and implementations, akin to what MIRI does.

~~~
miguelrochefort
Friendly AI? That's no AI!

I don't understand how Yudkowsky came up with such a ridiculous idea. That's
simply not a constraint you can apply to true AI.

Even if friendly AI were possible, it wouldn't make sense to have it, nor
could any form of regulation enforce it.

~~~
LesZedCB
You should read this! [1] Unfortunately, the problem is not nearly as simple
as you make it seem; otherwise this thread wouldn't be here. :)

[1] [http://waitbutwhy.com/2015/01/artificial-intelligence-
revolu...](http://waitbutwhy.com/2015/01/artificial-intelligence-
revolution-2.html)

------
BenjaminTodd
In the spirit of openness, it would be great to see public responses to the
downsides of this approach.

In particular, Bostrom Ch5.1 argues that the lead project is more likely than
not to get a decisive strategic advantage, leading to a winner-takes-all
scenario, which would mean attempts to foster a multipolar scenario (i.e. lots
of similarly powerful AGIs rather than one) are unlikely to work.

In Ch11 he explores whether multipolar scenarios are likely to be good or bad,
and presents many reasons to think they're going to be bad. So promoting the
multipolar approach could be both very hard, and bad.

------
cr4zy
This is great news! I think distributed access, control, and contribution to
the best AIs will help create 'safe' AIs much faster than any AI created in
secret. One thing this does not address, and something Jerry Kaplan makes an
excellent suggestion about in his recent book "Humans Need Not Apply", is
distributed ownership of AI: tax incentives for public companies with larger
numbers of shareholders would encourage wider distribution of the massive
gains AI will bring to these companies.

I really hope that the training data, as well as code and research, will be
opened up as well, since the public could really benefit from the self-driving
car training data Tesla may contribute[1]. By opening up the development of
this extremely important application to public contribution and the quality
benefits that it brings, we could get safer, quicker realization of this
amazingly transformative tech. As of now the best dataset for self-driving
cars, KITTI, is extremely small and dated. [plug] I am working on a project to
train self-driving car vision via GTAV to help work around this (please contact
me if you're interested), but obviously real-world data will be better in so
many ways.

[1] [https://medium.com/backchannel/how-elon-musk-and-y-
combinato...](https://medium.com/backchannel/how-elon-musk-and-y-combinator-
plan-to-stop-computers-from-taking-over-17e0e27dd02a#.xvtga98va)

------
erostrate
Does anybody know if there is any chance of OpenAI sponsoring H-1B visas?

I love the idea but being in Europe my options for doing serious AI research
outside of academia seem pretty much limited to Google and Facebook.

------
mori
What I want to know is whether there's collaboration with MIRI. On safety,
especially.

~~~
robbensinger
I replied to this here:
[https://news.ycombinator.com/item?id=10721068](https://news.ycombinator.com/item?id=10721068).
Short answer is that collaborations don't look unlikely, and we'll be able to
say more when OpenAI's been up and running longer.

------
BenjaminTodd
If you want to get a job in this area, we wrote a guide:
[https://80000hours.org/career-guide/top-
careers/profiles/art...](https://80000hours.org/career-guide/top-
careers/profiles/artificial-intelligence-risk-research/)

------
jgord
Interesting, and I hope they fund some outlier, less-established forms of AI.

For example, we may find that massive simulation yields more practical
benefits in the medium term than stronger pure AI / ML, in some domains.

By analogy with research on possibly harmful biosystems, one can extrapolate
the need for a set of agreed, self-imposed safeguards on certain types of
strong AI research - e.g. make them read-only, not connected to physical
actuators, isolated in a lab - just as you would isolate a potentially
dangerous pathogen in a medical lab.

OpenAI would be the place to discuss and propose these protocols.

A quote from a future sentient AI: "don't you think it's a form of racism
that strong AIs must abide strictly by the three laws of robotics, but humans
do not?"

~~~
argonaut
They wouldn't get $1B if they didn't do deep learning.

------
bholdr
This is really great, I think. At least, I admire the motivation behind it as
it was outlined by Sam.

However, it seems YC Research is starting by bringing in accomplished and
well-known academics in the field. I wonder whether it would've been more
appropriate to focus on providing PhD scholarships and postdoc fellowships.
Though I understand and somewhat appreciate the motivation behind bringing the
"top guns" of research into this, I wonder whether bringing in passionate,
knowledge-hungry early-career researchers could've been a better bet. I am
biased on this, but overall I think it would be great to diversify the group
and level the field -- let the randomness of ideas play its role :) Just my
5c.

~~~
nl
Pretty sure a group like that will be looking for postdocs etc.

Andrej Karpathy only completed his PhD this month, so I guess he'd fit into
that category. I imagine he had a few options to choose from.

------
SneakerXZ
I am surprised nobody mentions stupidly smart AI. We can create AI that is
capable of self-replicating very fast and fulfilling some goal.

It could start with a noble idea: build a machine to recycle our garbage and
use that garbage to build more recycling machines. In the end we could have
stupid machines that do their job perfectly, but because they are capable of
replicating and getting better at what they do, they determine that if they
kill humans, less garbage is created and thus there is less work for them.

In the end they would wipe us out, because the thing that kills us doesn't need
to be smarter than we are. It just needs to be faster and more effective.

------
RoboTeddy
I hope more great researchers recognize the importance of the mission and take
part!

------
sremani
Did not expect Infosys or Vishal Sikka alongside what is mostly a Silicon
Valley who's who.

------
foobarqux
How is the group structured and operated?

------
argonaut
I find it a bit disappointing that despite originally stating that YC Research
would target underfunded/underserved areas of research, they've decided to
fund and dive into one of the most-hyped, well-funded areas of research: deep
learning, an area of research where companies are hiring like crazy and even
universities are hiring faculty like crazy. I'm reasonably sure all the
research scientists had multiple job offers, and most could get faculty offers
as well.

Instead of funding areas of research where grad students legitimately struggle
to find faculty or even industry research positions in their field, YC
Research decided to join the same arms race that companies like Toyota are
joining.

~~~
pbreit
I'm disappointed that you're disappointed. There's $1b coming from primarily
not YC.

~~~
argonaut
The $ figure is not really my point. It's the focus and attention. The world
is not lacking in research interest in deep learning.

------
selfishAIgen
If I develop any advanced AI, I will use it for my own wellbeing: perhaps to
live longer, obtain a higher financial status, and fulfill some of my dreams.
Then I would develop a shield to protect myself and the AI from big
corporations and to retain the advantage I got. Perhaps I would try to make
Mars a paradise to live in for my thousand-year-long life, and find or design
a partner for that long period. Let the machine create the dream.

~~~
selfishAIgen
The first and main test for an advanced AI is to be able to provide its
creator with a big sum of money in a sustainable way. Why would anyone wish to
share such a useful technology? What I think would be handy is to find experts
or partners to protect the research with strong closed walls: a womb for the
baby AI device to grow up in, aimed at taking over the world of business to
get the necessary resources, probably in a creepy way, to fully expand itself
and provide its creator with the best reward you could imagine.

------
altonzheng
Cool that they have $1 billion pledged. Curious how they will decide
compensation, seeing as a lot of these figures would be making a ton in
industry.

------
viklas
My money (not a billion) is on "Open, Big Learning".

Elon will probably want to build a giga-factory of neurons, then open-source
some pre-trained, general model with a free API.

This is a man building electric cars, off-grid industrial-strength batteries,
rockets, and hyperloops... I don't think publishing more/better research papers
or winning Kaggle competitions is the vision.

------
fiatmoney
Will OpenAI be voluntarily subjecting itself to the same regulatory regime for
machine learning research Sam Altman proposed earlier, or have they realized
that would be a complete disaster?

[http://blog.samaltman.com/machine-intelligence-
part-2](http://blog.samaltman.com/machine-intelligence-part-2)

------
spectrum1234
This is awesome.

I was literally just wondering when there would be open-sourced AI. I had only
seen a few repos on GitHub, so I figured it would take at least 3-10 years. The
fact that things like this surface so quickly, including the recent AI
announcements from Google, etc., is a very good sign for the future of AI.

------
mark_l_watson
Sounds great. I was hoping for OpenCog to be a good open source AI framework,
but it is difficult to work with (good team; I have worked with several of
them in the past, no criticism intended).

I look forward to seeing how OpenAI uses outside contributions, provides easy
to use software and documentation, etc.

~~~
nicklo
OpenAI seems to be taking a different approach from OpenCog. OpenCog aimed to
build a monolithic framework for many existing AI and machine learning
techniques. This has been done many times before.

OpenAI is more about exploring new research areas and pushing the cutting
edge, while publishing papers and sharing code along the way. Both are
admirable goals, but what OpenAI is aiming for has never been attempted
before.

Very excited to see what comes of it!

------
CurtMonash
First in with my recent musings as to whether behemoth companies would own the
AI space.

[http://www.dbms2.com/2015/12/01/what-is-ai-and-who-has-
it/](http://www.dbms2.com/2015/12/01/what-is-ai-and-who-has-it/)

------
dennisgorelik
What problem is OpenAI going to solve?

------
mrdrozdov
Imagine you've programmed a spider-like robot whose sole purpose is to
maintain some energy level (by plugging into an outlet), gather resources, and
create a clone of itself when it has enough resources. How do you defend
against something like that?

~~~
kayamon
That isn't really any different from, say, a tiger, which is currently facing
extinction due to our actions against it.

~~~
marvin
Or even a bacterium. Thankfully, no current biological entity is sufficiently
versatile to take over the world ;)

------
richardw
How do we prevent Future ISIS from getting Future AI? Or do we just shift from
us trying to out-think them to our AI trying to out-think their AI?

If the answer to the latter is "resources" then we're back where we started.
Whoever has the biggest AI wins.

The picture seems to be of many AIs all keeping each other in check, but that
outcome seems less likely to result in the AI-UN and more likely to resemble a
primordial soup of competing AIs out of which a one-eyed AI will eventually
emerge.

No matter how human-friendly an AI we build is, competition will be the final
arbiter of whichever AI gains the most leverage. If a bad AI (more aggressive,
more selfish, more willing to take shortcuts) beats a good AI (one that limits
its actions out of consideration for humanity), we're poked. If any level of AI
can invent a more-competitive AI, we're poked. Once the cat's out of the bag,
we have zero influence, and our starting point and current intent become
irrelevant.

~~~
andreyf
ISIS does not have access to many CS researchers nor server farms, as far as I
am aware.

~~~
richardw
Yes, but I did say "If the answer to the latter is "resources" then we're back
where we started."

------
nazgulnarsil
I hope there was some consultation with existing AI researchers as this might
screw with their funding (willingness of donors etc.). Would not be a good
sign if this announcement is about coordination and it failed at that right
out of the gate.

------
kumarski
My greatest fears lie well outside the realm of AI.

[http://bit.ly/nitrogenandphosphorus](http://bit.ly/nitrogenandphosphorus)

A billion dollars invested in it seems exciting, though. Hopefully something
epic comes out of it.

------
sianta
OpenAI might be equivalent to an open global market of graph-annotated
microservices that can recombine automatically (and search as deeply as
budgeted) toward whatever goal a client is able to pay the processing for.
Not sure that is safer.

With the right microservices available in the market (including business-model
scripts, etc.; every service could be an automatic pay-per-use microservice),
automated businesses could be budgeted to search for sustainable market-entity
models which could reproduce themselves (copy/create microservices should be
basic operations) and evolve into global corporations with lives and
objectives of their own. One might need immense processing budgets to compete
with or control such automated corporations.

Digital and/or biological, it seems we are exactly in this
business+market+life+AI game. Curious to learn what happens at the next
levels.

------
dkarapetyan
I'll just leave this here: [http://plato.stanford.edu/entries/chinese-
room/](http://plato.stanford.edu/entries/chinese-room/)

~~~
argonaut
Searle (hearsay): "I don't remember what I wrote. I'm not sure I even believe
that anymore." Source: [https://www.quora.com/What-are-some-objections-to-
Searles-Ch...](https://www.quora.com/What-are-some-objections-to-Searles-
Chinese-Room-thought-experiment)

------
jwildeboer
So, um, what is Open about OpenAI? Is it Open Source? Not AFAICS.

------
a-dub
Oh shit. Say goodbye to reasonable g2.8xlarge spot prices...

~~~
argonaut
I'm not aware of any research lab that uses AWS for these things. It's cheaper
to just buy the GPU yourself.

~~~
jedberg
AWS is a sponsor of this, which probably means a bunch of free resources.

~~~
argonaut
g2.8xlarge also only has 4GB of VRAM per GPU, which is too small for most
recent deep learning models. The TitanX GPU has 12GB, by comparison.
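
Rough arithmetic backs this up (the parameter count is the commonly cited
VGG-16 figure; the per-image activation cost is a ballpark assumption):

```python
# Back-of-envelope fp32 training memory for a VGG-16-sized convnet.
params = 138e6                     # VGG-16 parameter count (commonly cited)
bytes_per_float = 4                # fp32

weights   = params * bytes_per_float   # model weights
gradients = params * bytes_per_float   # one gradient per weight
momentum  = params * bytes_per_float   # SGD momentum buffer
activations_per_image = 100e6          # ~100 MB fwd+bwd, a rough assumption

for batch in (16, 64, 128):
    total = weights + gradients + momentum + batch * activations_per_image
    print(f"batch {batch:4d}: ~{total / 1e9:.1f} GB")
```

Even a modest batch size already brushes the 4GB ceiling before counting
framework overhead, which is why the 12GB card buys real headroom.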

------
roborzoid
What if we put "untouchable" limitations on AI that the AI can never break,
just as we cannot break certain limitations in the physical world?

------
tangled_zans
Who are the actual staff involved? What sort of things have they worked on and
published before?

~~~
johann28
> OpenAI's research director is Ilya Sutskever, one of the world experts in
> machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The
> group's other founding members are world-class research engineers and
> scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma,
> John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua
> Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group.
> OpenAI's co-chairs are Sam Altman and Elon Musk.

Sutskever is a researcher at Google, worked with Hinton in Toronto and Andrew
Ng at Stanford.

Karpathy studied in Toronto and at Stanford, worked under Fei-Fei Li, worked
at Google. He also has an awesome blog and seems very active and passionate
about computer vision and ML.

Kingma also works with deep neural nets, worked under Yann LeCun (who works at
Facebook)

Schulman is a PhD Candidate at Berkeley with publications at top conferences.

Zaremba is a PhD student at NYU and an intern at Facebook. Impressive
publication list and awards.

Abbeel is at Stanford's AI lab.

Bengio is one of the "stars" and celebrated figures of the deep net revival.

Levine is a researcher at Google working on deep nets with many serious
papers.

---

Basically these are the main domain experts among them. The list is quite
skewed to Google/Facebook, Stanford/Berkeley/Toronto and deep net researchers,
working primarily on computer vision.

~~~
jychang
> Abbeel is at Stanford's AI lab.

Uhhh....
[https://www.google.com/search?q=Pieter+Abbeel](https://www.google.com/search?q=Pieter+Abbeel)
That's a lot of results showing how he's been a professor at Berkeley since
2008.

He received his PhD at Stanford, then went to be a professor at Berkeley.

~~~
johann28
His website didn't load for some reason so I just went with the Google hit's
title. Maybe that was his old page.

------
ultim8k
The future of this seems very interesting. I'm very curious.

------
coderKen
Where can we get resources like APIs and documentation for this cool stuff?

------
endergen
Isn't this the plot for Avengers: Age of Ultron?

~~~
stefantalpalaru
> Isn't this the plot for Avengers: Age of Ultron?

It is. Also for Terminator Genisys.

I suspect it was a PR stunt that took on a life of its own. These rich/famous
people with zero understanding of the AI field somehow got convinced that they
need to save the world from the highly improbable, and they keep going long
after the movies have run.

It's ridiculous, of course. They might as well pledge funds for OpenTelepathy
and OpenRemoteViewing.

------
fuzzytop130
nice addition

------
zxcvvcxz
Man I dunno about some of this media hype surrounding the topic of AI. I
understand how powerful ML/AI algorithms are for general pattern matching
(with a big enough computer, gradient descent can learn a lot of things...),
but this whole skynet/doomsday fear thing seems ridiculous.

I guess the risk is embedding it into systems that manage missiles or
something. But you don't need sophisticated algorithms for that to be a risk,
just irresponsible programmers. And I reckon those systems already rely on a
ton of software. So as long as we don't build software that tries to "predict
where this drone should strike next", we're probably fine. Actually, shit,
we're probably doing that... ("this mountainous cave has a 95% feature match
with this other cave we bombed recently..."). Fuuuuck, that sounds bad. I
don't know how OpenAI giving other people AI will help against something like
that.

~~~
Geee
In my opinion the biggest danger is letting AI sort our news and search
engine results, social media feeds, etc. There was research at Facebook on how
they can affect people's moods by using different weights for posts. What
happens when intelligent bots start writing news, blogs, comments, tweets?

In essence, I mean the dangers of using AI for large scale propaganda through
Internet services. The best tools of the most dangerous people and movements
have always been manipulation and propaganda; what if a _perfect_ AI does it?
Could we even notice it?

Even when the AI is given a seemingly safe task, such as "optimize for
clicks" on a news website, something dangerous might happen in the long run if
dangerous content is what's optimal for clicks.
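
This failure mode doesn't require anything exotic; even a bare-bones bandit
loop exhibits it (the headlines and click-through rates below are invented
purely for illustration):

```python
import random

# Hypothetical click-through rates: the sensational headline simply wins.
ARMS = {"measured_report": 0.02, "sensational_claim": 0.08}
counts = {arm: 0 for arm in ARMS}
clicks = {arm: 0 for arm in ARMS}

def choose(eps=0.1):
    """Epsilon-greedy: mostly exploit whichever arm looks best so far."""
    if random.random() < eps or min(counts.values()) == 0:
        return random.choice(list(ARMS))
    return max(ARMS, key=lambda arm: clicks[arm] / counts[arm])

for _ in range(100_000):
    arm = choose()
    counts[arm] += 1
    clicks[arm] += random.random() < ARMS[arm]  # bool adds as 0 or 1

print(counts)  # the "optimizer" converges on the sensational arm
```

Nothing in the loop knows or cares what the headline says; "optimize for
clicks" is its entire value system.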

~~~
camillomiller
Short-term, yes. What's the filter bubble already, if not the outcome of a
super intelligent centralized AI silently and invisibly deciding what's best
for us to see, molding our own artificial world dynamically based on our
supposed preferences, excising away every possible serendipitous misalignment
with our digital self as it is perceived by the machine?

~~~
visarga
I's easy to build uncensored search engines and news feeds to counter the
bubbling effect. Much easier than building AI.

------
necessity
>unconstrained by a need to generate financial return

The incentive, not the constraint, provided by financial return is what drives
innovation the most, aside from (but not mutually exclusive with) necessity.

------
dopamean
In all seriousness... does "just, wow" communicate something different from
"wow?"

~~~
dang
I think it's an interesting language question too! But we detached this from
[https://news.ycombinator.com/item?id=10720212](https://news.ycombinator.com/item?id=10720212)
and marked it off-topic.

~~~
dopamean
What does detached mean? You just removed the comment? I'm certainly not
complaining; I'd just like some clarification on the jargon. Thanks.

~~~
nathancahill
As far as I can tell, detaching a thread moves it from the parent comment to
the parent post. Marking it as off-topic moves it to the bottom, just above
downvoted comments.

------
negrit
Disappointing to see Infosys associated with this initiative.

EDIT: looks like the infosys brigade is downvoting me to hell.

~~~
azzafazza
I seem to have missed a story here. A quick Google search turned up a letter
on Quora, [https://www.quora.com/Is-working-in-Infosys-as-bad-as-
this-l...](https://www.quora.com/Is-working-in-Infosys-as-bad-as-this-letter-
claims-to-be), is that what you are referring to?

~~~
negrit
YC is lobbying to change the H-1B system in order to let startups get more
H-1Bs. Infosys is blatantly abusing and cheating the H-1B system, so badly that
startups are getting penalized when sponsoring H-1B visas.

And now YC is getting in bed with infosys...

~~~
dang
The relationships of large organizations can be surprisingly complex; consider
Apple and Samsung. YC isn't large, of course, but Infosys is. The information
content of the OpenAI funding announcement for immigration questions is
probably zero. (No special knowledge behind this comment, just a general
observation.)

Edit: Please don't break the HN guidelines by complaining about downvoting.
Downvotes to your comment upthread are not because of any "Infosys brigade";
they're most likely because it combined oversimplification with negativity and
because it points discussion toward a pre-existing controversy that is off
topic here.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
samstave
Off topic: But I have always wondered why we have a threshold to hit before
we can downvote comments - but why can we never downvote posts? Or is the
karma threshold just really high to have that function?

~~~
dang
By posts do you mean stories, i.e. the kind of submission that appears on the
front page? If so, HN doesn't have downvotes for those. The flagging mechanism
is arguably something similar though.

~~~
samstave
Yes, I did... thanks!

