
OpenAI Team Update - sama
https://openai.com/blog/team-update/
======
latenightcoding
I really like OpenAI's mission and respect the people who work there, but the
last time I read their blog, Reddit AMA, and other posts here, it seemed that
they don't have any concrete goals.

~~~
nl
I think that "Advance Digital Intelligence"[1] is a pretty concrete goal. To
me it means "make computers do things that only humans can do now".

For a research organization I'd question if you'd want something more
specific?

[1] [https://openai.com/about/](https://openai.com/about/)

~~~
IanCal
At what point can you say that has been achieved? A concrete goal to me would
require a way of knowing it's been completed.

> For a research organization I'd question if you'd want something more
> specific?

Definitely. The goal at the moment sounds more like a funder's, although again
I'd hope for something more solid (e.g., achieve human-level performance on
problems X, Y, and Z).

~~~
nl
_At what point can you say that has been achieved? A concrete goal to me would
require a way of knowing it's been completed._

I'd say that's a good problem to discuss when someone (anyone!) claims they
are close to achieving it.

~~~
IanCal
I'm not really sure that after the fact is the right time to work out what on
earth your goal actually means.

To be more constructive, I'd recommend a short-term goal along the lines of:

Identify 5 areas where the concentrated effort of OpenAI could significantly
improve people's lives within 10 years.

This kind of mid-level goal (work out a plan) feels fairly sensible, and
should lead to research into how tech interacts with people (or doesn't yet)
and into which areas may be open or underfunded. Maybe you'll find a big
bottleneck that would help but isn't commercially viable to investigate.

Now with some identified areas to work in you can make more specific plans or
goals for those. What's the first step?

------
amasad
Coincidentally, I've been reading Paul Christiano's medium posts on the AI
control problem [1]. It's great to see AI Safety research folks join the
OpenAI team.

[1]: [https://medium.com/ai-control](https://medium.com/ai-control)

------
fitzwatermellow
Came across a nice paper by OpenAI's Ian Goodfellow and Google Brain on using
video prediction to model a robot's "after-effects":

Unsupervised Learning for Physical Interaction through Video Prediction

[http://arxiv.org/abs/1605.07157](http://arxiv.org/abs/1605.07157)

------
nhaliday
It looks like the first guy's paper is partially based on an idea well-known
in the competitive programming community (makes sense given the 3 IOI medals).

[http://codeforces.com/blog/entry/18051](http://codeforces.com/blog/entry/18051)

Cool.

------
jpetso
Disappointing that out of 8 hires there's only a single woman, and of course
she was hired for a role that's less deep-research-y than the other new guys'.

------
zump
These guys are insane overachievers...

------
infyr
To test out increasingly 'general' and advanced A.I., you need a
sandbox/playground environment.

You can use a real-world environment with a robot (noisy, higher latency,
computationally restrictive, more headaches, and extra development time).

Or you can use a 'virtual' world. Microsoft is using Minecraft as its
sandbox/playground. Video games make a good sandbox/playground: physics
engine + sim + A.I. hook.

You can generalize this into an observation->action loop platform.

There are many open-source platforms out there for this. Take Box2D, which
OpenAI put a custom wrapper around, as an example. It's a 2D physics
simulation engine; two dimensions means less complexity than three. You have
your 'physical environment' and then you hook your A.I. into it via an
observation/command loop.
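Stripped down, that observation/command loop is only a few lines of code. A
toy sketch in Python (the environment and agent here are made-up stand-ins for
illustration, not OpenAI's or Box2D's actual API):

```python
import random

class ToyEnv:
    """A 1-D 'reach the goal' world: state is a position, actions move it."""
    def __init__(self, goal=5):
        self.goal = goal
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):                # action is -1 or +1
        self.state += action
        done = self.state == self.goal
        reward = 1.0 if done else -0.01    # small cost per step taken
        return self.state, reward, done

def random_agent(observation):
    """Dumbest possible policy: ignore the observation, act randomly."""
    return random.choice([-1, 1])

env = ToyEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(1000):                      # the observation -> action loop
    action = random_agent(obs)
    obs, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
```

Swap in a real physics engine for `ToyEnv` and a learner for `random_agent`
and you have the shape of every one of these gyms.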

There are public sandboxes/playgrounds and there are private ones. Many of
these engines aren't hard to tie into. I'm sure this was one of the major
things to resolve at OpenAI, as it is for anybody in the space: make a
'playground environment for testing your A.I.'

I'm also sure that OpenAI has a more advanced private gym with higher fidelity
links to the A.I like everyone in the space does. Members only ;)

So, OpenAI has packaged a bunch of open-source engines etc. into an
approachable platform to make development easier. OpenAI also hopes to get
people to upload their results and detail how they achieved them...
Interesting.

A.I. development is at the point, IMO, where you don't need a name or
accolades to contribute. You don't need a PhD. You don't need to be an expert
in the field. You don't need to be an award-winning coder. In some ways, those
things can even harm you by ingraining a fixed view of how to approach
problems in a space that is begging for new paradigms.

Dedicate a solid month and you can have a virtual A.I. gym set up and be off
and running.

If you have done any serious code development, you can easily break into this
space.

The most time-consuming part will be wrestling with these packages and
dependencies, understanding them, and figuring out how to hook in and out of
them.

It seems OpenAI has tried to reduce this pain with the release of OpenAI Gym.

However, you'll find that if you get into any serious A.I. dev, you're going
to want to start cutting through people's wrappers and add-on layers, which
add latency, increase complexity, and keep your A.I. away from the heart of
the sim.

You'll want your own custom hooks.... You'll want the code to be as low level
as possible.

Hi there (gdb), cute handle =P.

~~~
lawyao
Box2D is a reasonable environment for testing out reinforcement learning
controllers in a simulated environment.

[http://otoro.net/ml/pendulum-esp-mobile/](http://otoro.net/ml/pendulum-esp-mobile/)
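The core of that kind of pendulum sim is tiny: integrate θ'' = -(g/l)·sin(θ)
plus a control torque. A bare-bones, semi-implicit Euler version (my own
illustrative sketch with made-up gains, nothing to do with otoro's actual
implementation):

```python
import math

def pendulum_step(theta, omega, torque, dt=0.02, g=9.8, l=1.0, m=1.0):
    """One semi-implicit Euler step of a torque-driven pendulum.

    theta: angle from rest (rad), omega: angular velocity (rad/s).
    """
    alpha = -(g / l) * math.sin(theta) + torque / (m * l * l)
    omega = omega + alpha * dt          # update velocity first...
    theta = theta + omega * dt          # ...then position with new velocity
    return theta, omega

# A trivial hand-tuned PD 'controller' pushing the pendulum toward theta = 0.
theta, omega = math.pi / 4, 0.0
for _ in range(500):                    # 500 steps * 0.02 s = 10 s of sim time
    torque = -2.0 * theta - 0.5 * omega
    theta, omega = pendulum_step(theta, omega, torque)
```

A reinforcement learner's job is essentially to discover that `torque` line on
its own, from reward alone.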

~~~
infyr
It is. I tried it out, but it doesn't have the fidelity, with respect to I/O
and 'hooks', that I was looking for. Then again, I'm working beyond
reinforcement learning. More advanced A.I. necessitates a more advanced gym.

I highlighted Box2D because it's an open-source physics/sim engine.

The sim/physics engine is 90% of the 'gym'. I'm sure that if there were money
involved, the sims/hooks/etc. being sourced and packaged up could get a lot
more advanced more quickly, e.g.:
[https://github.com/erincatto/Box2D/issues](https://github.com/erincatto/Box2D/issues)

------
fjdjdjnddn
So far I've seen lots of press releases and credential waving, but no actual
development plan or progress. Until they start publishing papers a la Google,
I'm putting this in the vaporware basket.

~~~
Analemma_
This is kind of par for the course in non-profit AI research. OpenAI's
spiritual parent MIRI also has something of a problem with lots of press
releases and credential waving but few impactful papers or results.

That sounds a little harsh, and I don't really mean it that way. Commercial
AI/ML projects, since they have a clear goal to measure against (play Go,
drive a car, get better search results), tend to make meaningful progress that
we can see, whereas the more philosophical, research-oriented non-profits
don't have a yardstick like that, and probably won't until we have an inkling
of how to actually make "strong AI". So this organization will probably seem
like vaporware for some time, but hopefully it will prove its worth in the
long run.

~~~
gdb
Don't worry, we can and will do better than that. Please hold us to a higher
standard!

~~~
foobarqux
How does the group work exactly? Is everyone working on their own thing? Is
the group focused on a specific goal?

