
OpenAI Research - aburan28
https://openai.com/research/
======
cmpb
On their "OpenAI Charter", they list several basic principles they'll use to
achieve the goal of safe AGI, including this one, which I find pretty
interesting:

>We are concerned about late-stage AGI development becoming a competitive race
without time for adequate safety precautions. Therefore, if a value-aligned,
safety-conscious project comes close to building AGI before we do, we commit
to stop competing with and start assisting this project. We will work out
specifics in case-by-case agreements, but a typical triggering condition might
be “a better-than-even chance of success in the next two years.”

If I'm reading that correctly, it means that later on if/when some company is
obviously on the cusp of AGI, OpenAI will drop what they're doing and start
helping this other company so that there isn't a haphazard race to being the
first one, which could result in unsafe AGI. That sounds like a well-intentioned
idea that _could_ cause more problems in practice. For instance, if there are
multiple companies on almost equal footing, then combining forces with one of
them would impose an even stricter sense of deadline on the others, possibly
making development even less safe.

Also, they only mention assisting "value-aligned, safety-conscious" projects,
which seems pretty vague. It just seems like they should give (and perhaps have
given) more thought to that principle.

~~~
antonvs
The problem is they're trying to filter the production of a group of unrelated
organizations, without any recognized authority to do so. The vagueness of the
principle reflects the intractability of the goal, in the current environment.

~~~
p1esk
It's not even clear how to evaluate whether anyone "comes close to building
AGI".

Have they defined what "AGI" is supposed to be? I can't find it on their
website.

------
dawidloubser
For anybody else excited when they hear "open" and "AGI" in the same sentence,
if you don't know the OpenCog project, the wiki (especially if downloaded in
book form) makes for fascinating reading:

[https://wiki.opencog.org/w/The_Open_Cognition_Project](https://wiki.opencog.org/w/The_Open_Cognition_Project)

~~~
godelmachine
By book form you mean PDF form?

------
logicallee
A question. It says:

>OpenAI conducts fundamental, long-term research toward the creation of safe
AGI.

What does "safe AGI" mean?

Obviously most readers would immediately think of something like: "AGI that
won't enslave humanity, kill millions of people as part of its optimization
process, crash planes, order drone strikes on civilian sectors, or escape its
"cage" (whatever it was supposed to be doing)."

But that seems like a strange and poorly-defined explicit goal. And kind of
early to be putting into an announcement like this. Does it really mean that -
or does it mean something else, more specific - and if so, what?

I would be interested in knowing what the person who wrote that word had in
mind, since I think most people would think of the Terminator series - Skynet
- or The Matrix, etc, when it comes to AGI.

----

EDIT: To elaborate on why we should define "safe": I know what "safe" means
when we say "a memory-safe programming language".[1] It's very specific. In
that sentence it doesn't have anything to do with enslaving humanity, nor does
anyone think it does. Here are some articles on this exact subject:
[https://www.google.com/search?q=memory+safety](https://www.google.com/search?q=memory+safety)

Further, it's pretty obvious what we mean when we say "a safe autonomous
vehicle", because whether an accident occurs is pretty cut-and-dried. There are
gray areas: for example, is a vehicle "safe" if it's driving under the speed
limit and gets into an accident through no fault of its own, when advance
knowledge of all other vehicles heading toward the same intersection
(regardless of visibility) would have kept it out of that accident? Clearly a
car that slows down based on knowledge a human driver wouldn't have can be
safer than another kind of car. But we still understand this idea of "safety"
when it comes to cars.

But what does "safe" mean when we say "creation of safe AGI"?

It must mean something to be in that sentence. So why and how can you apply
the word "safe" to AGI? What does it mean?

[1] even has a complete Wikipedia article:
[https://en.wikipedia.org/wiki/Memory_safety](https://en.wikipedia.org/wiki/Memory_safety)

~~~
icebraining
Have you tried looking for what they mean by safe AGI?

Ex: [https://blog.openai.com/concrete-ai-safety-problems/](https://blog.openai.com/concrete-ai-safety-problems/)

~~~
logicallee
No, I wasn't aware of that. This is a great link, thanks!

To anyone else here is their excellent 2016 paper linked from the above:

"Concrete Problems in AI Safety"

[https://arxiv.org/abs/1606.06565](https://arxiv.org/abs/1606.06565)

and direct link to 29-page PDF:

[https://arxiv.org/pdf/1606.06565.pdf](https://arxiv.org/pdf/1606.06565.pdf)
[PDF!]

However, that PDF does not mention "AGI" even a single time, except in
reference 17, "The AGI containment problem". That paper is actually also
available online at
[https://arxiv.org/abs/1604.00545](https://arxiv.org/abs/1604.00545), but it
doesn't seem to be what they have in mind.

So it seems the use of the term "safe" is actually much narrower in the AI
literature, and probably in the mind of the person who wrote that sentence,
than I as a lay reader assumed when reading it applied to AGI. It's an
interesting idea.

~~~
fossuser
Other terms that will help in your search are Friendly AI and the goal
alignment problem.

Check these out: [https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/)

[https://wiki.lesswrong.com/wiki/Orthogonality_thesis](https://wiki.lesswrong.com/wiki/Orthogonality_thesis)

------
ayakura
The best thing I want to get out of OpenAI right now is a 10v10 Dota 2
pros-versus-bots all-star match. I wonder if they got anything out of last
year's data from all the pros playing against their bot...

~~~
rhlala
Or CS:GO; I think there's more teamplay there than in Dota, lol. The only
thing is you'd need to take out the aiming and shooting aspect.

~~~
romaniv
I would be way more impressed if there were an AI that can play Overwatch.
It's one of those games that are fundamentally hard for computers because of
meta-gaming, map differences and ability combinations. It would be much harder
for an AI to converge on some single game-breaking trick, or just brute-force
possible strategies through blind trial and error and memorization.

Also, aim doesn't matter as much, so you wouldn't have to introduce some
artificial restrictions. (Reaction time does give you a big advantage, though.
You can counter a lot of things simply by reacting to voicelines in time.)

------
backpropaganda
It seems like the website is getting ready for an announcement.

~~~
dqpb
How so?

~~~
backpropaganda
It's the same content with a new layout. People usually do this when they're
expecting a lot of traffic on their site, which typically corresponds to
announcements.

------
fouc
I'm not a fan of this website design; it strikes me as an attempt to look
extra fancy. AI is more human than that.

~~~
spiderfarmer
As a counterexample, I am a fan of this website design. It's nice and
readable, and does not distract.

