
Inside OpenAI - brtkbrtk
http://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/
======
freddealmeida
I often wonder about this. Four things are needed to truly build advanced AI
(read: deep learning, deep reinforcement learning): new algorithms, complex
data sets, advanced GPU-based computing, and an open community.

I think AI research is one of the most open fields, and this openness is really
at the center of its growth. So I am happy OpenAI has started, since it is
within this vein of sharing that the community has already built. But I
certainly don't fully grasp how it can open AI to the world unless it can share
rather valuable data sets (often impossible-to-get data, such as personal
health records) and make computation much, much cheaper.

Let me illustrate my concern: AlphaGo required not just 30M game positions and
a complex understanding of both policy- and value-network design, but also
1,000 CPUs and 200+ GPUs. That is something on the order of a few million
dollars to build and utilize.

I look forward to the work coming from OpenAI. I hope it lives up to the hype.
But I believe AI will more than likely remain squarely in large enterprises,
since the cost of developing applications will be high in the short term, and
possibly well beyond it.

~~~
AndrewKemendo
_Four things are needed to truly build advanced AI (read: deep learning, deep
reinforcement learning): new algorithms, complex data sets, advanced GPU-based
computing, and an open community._

Actually, we have no idea what the constituent parts of AGI are.

What you mention is the current state of the art for narrow AI projects like
classification and segmentation, which is basically 100% of machine/deep
learning currently, but it is not generalizable yet.

As an example, the pre-eminent biologically-inspired-computing researcher
Richard Granger is skeptical (and I agree) that parallel silicon will be able
to scale to the flexibility that we see in biological learning (i.e., general
intelligence).

Based on what I see so far from OpenAI, I don't see them getting to AGI. They
haven't stated it as an explicit goal, I think because they don't have a
pathway (nobody does, by the way).

~~~
maaku
You are correct to point out that machine learning is NOT general
intelligence, and what OpenAI is working on really has very little to do with
AGI and superintelligence, sadly.

But how can you say "we have no idea what the constituent parts of AGI are" or
"they don't have a pathway (nobody does by the way)"? There is an active and
vibrant (if sometimes eclipsed) AGI community. There is an annual AGI
conference. There are a half dozen or so actively developed AGI projects with
comprehensive architectures with attached roadmaps. It's an active area of
research, but it's not like we have no idea how to build a general
intelligence, or what such an architecture might look like.

~~~
AndrewKemendo
Uh, I go to the same conferences - in fact I'll be at AGI 16 this year and I
was at AGI 14. Ben was my research advisor for my Masters.

I stand by my statements. The community, or even a handful of researchers
haven't come up with a competent path to AGI. That's indisputable.

 _It's not like we have no idea how to build a general intelligence, or what
such an architecture might look like_

Show me one, I'd love to see it.

Listen, I love everyone working on these, and many are my friends; but none of
today's attempts has anything near the specificity of a project-management
roadmap needed to say with any certainty that AGI is even a probable outcome.
Not OpenCog, not Numenta, not MicroPsi.

That's not a hit on any of them, either. The people and the areas they are
working on are awesome, amazing, and fundamental to research, but none of them
would claim to have a solid roadmap. Even the _roadmap sessions_ at the
conferences usually go nowhere, because we just don't know enough about how
generalizable intelligence works yet.

~~~
maaku
> I stand by my statements. The community, or even a handful of researchers
> haven't come up with a competent path to AGI. That's indisputable.

The key word there is "competent." You're making a subjective evaluation.
Given your CV you must surely be aware that Goertzel has a 1,000-page book
(two volumes, actually) laying out in great detail his roadmap to human-level
intelligence. The leaders of the other major projects in this space have their
own ideas, which they talk about freely at the AGI conferences, and which are
written down to varying degrees.

~~~
argonaut
> book... roadmap... ideas

Notice a pattern?

Meanwhile, in deep learning (and FWIW I don't think any deep learning
researchers are under the illusion that deep learning provides a path to AGI),
there are working systems that outperform humans at narrow visual tasks (image
classification, segmentation, etc.), a working Go bot, early prototype systems
that caption images; the list goes on and on.

------
mlinksva
> OpenAI is not a charity.

[https://backchannel.com/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a](https://backchannel.com/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a)
includes a quote from Musk:

> And as a result of a number of conversations, we came to the conclusion that
> having a 501c3, a non-profit, with no obligation to maximize profitability,
> would probably be a good thing to do.

A 501c3 is a charity. Is "not a charity" inaccurate, or did OpenAI decide on
some other organizational form, perhaps a trade association?

~~~
studentrob
This may help: Public Charity vs. Private Foundation [1]

Generally, I think charities are understood to use most of their funds to
directly support the people in need rather than spending money on
administration of the charity.

By the way, non-profits are not viewed as inherently good by those within the
sector:

> Charity non-profits face many of the same challenges of corporate governance
> which face large, publicly traded corporations. Fundamentally, the
> challenges arise from the "agency problem" - the fact that the management
> which controls the charity is necessarily different from the people who the
> charity is designed to benefit [2]

[1] [https://www.501c3.org/public-charity-vs-private-foundation/](https://www.501c3.org/public-charity-vs-private-foundation/)

[2] [https://en.wikipedia.org/wiki/United_States_non-profit_laws#Organization](https://en.wikipedia.org/wiki/United_States_non-profit_laws#Organization)

------
stevesun21
Is it time that we, as human beings, imprint some logical laws into all AI
projects that intend to create unsupervised AI?

~~~
jbpetersen
That's about as practical as making a computer capable of everything except
intellectual property infringement.

