
#define CTO OpenAI - sama
https://blog.gregbrockman.com/define-cto-openai
======
openthrowawAI
This story makes it crystal clear how much OpenAI is accelerating research,
for now mostly in reinforcement learning.

I'm sure this has been discussed over and over again, but could someone please
walk us through the AI safety rationale behind this? With Gym and Universe,
OpenAI is most likely slashing a few months (years?) off the singularity
countdown. What's the upshot? Why is the expected value of these initiatives
positive? The uncertainties seem extremely large.

Edit/PS: To put it more bluntly and be more specific: are there any projects
that OpenAI is choosing NOT to pursue even though they would be very
useful/cool for the research community (à la Gym and Universe), but where it
has had to explicitly restrain itself because the expected value from an AI
safety perspective is negative?

~~~
gdb
Less about influencing the _velocity_, more about influencing the
_direction_. Technologies tend to reflect the values of their inventors. We
want to ensure this technology is beneficial to humanity — meaning that it's
good at all, and that it benefits the many rather than the few.

We also think safety matters and should be researched in lockstep with
advances in capabilities. We have good relationships with MIRI and FHI.
Our safety researchers published (together with Google Brain) a roadmap of
concrete safety problems [1] and work to provide tools to prevent ML systems
from being subverted [2].

No one yet knows the precise details of how AI should play out. But I'd
certainly prefer that, whenever it gets close, one of the organizations
actually making the advances has no incentives besides ensuring a good
outcome.

[1] [https://openai.com/blog/concrete-ai-safety-problems/](https://openai.com/blog/concrete-ai-safety-problems/)

[2] [https://github.com/openai/cleverhans](https://github.com/openai/cleverhans)

~~~
defen
> Technologies tend to reflect the values of their inventors.

 _Maybe_ for single-use or "constrained" technologies (to be honest I don't
even believe that - how does a B-52 Stratofortress reflect the values of
Orville and Wilbur Wright?). But isn't the whole point of generalized AI that
it's _not like_ other technologies? Even if "regular" technology reflects the
values of its inventors, what reason is there to believe that an AI will? AI
is a technology that can use itself.

~~~
visarga
AI will only have a will of its own if it is designed as such, and that means
it would have a reinforcement learning system on top of lower sensory and
action modules. Even if it is based on RL, it will do what its reward signal
tells it to do.

~~~
defen
> AI will only have a will of its own if it is designed as such

Humans weren't designed to have a will, and yet we seem to have one.

> it would have a reinforcement learning system on top of lower sensory and
> action modules.

Isn't that what OpenAI is doing with Universe? It's simulated sensory/action
modules now but I don't see why they couldn't be hooked up to real ones.

~~~
josh2600
> Humans weren't designed to have a will, and yet we seem to have one.

I have no idea how you could possibly infer this.

~~~
defen
Which part? I don't think humans were designed - we're probably the result of
an evolutionary process without intentional design - but "humans were designed
by God to have free will" would be a counter to my statement, yes.

If your complaint is my claim that we have a will, I'm using the common-sense
version encoded into our legal and cultural system. I agree that we don't have
a good concept of what intentions are, or how they causally connect to
actions, but I do know that for at least some of my actions I experience
something called "intent" before I undertake the actions.

My overall point was that the capacity for intent can arise through an
evolutionary process without being designed in, but it does rest on the two
assumptions I just listed.

------
jkrause314
I think there's a bug: this replaces all occurrences of 'CTO' with 'OpenAI'.

~~~
kennu
I thought the article would be about replacing CTOs with AIs, based on the
title.

------
ChuckMcM
Nicely put together, sounds like the first place in a long time where I
simultaneously would love to apply and feel completely intimidated by what is
clearly a really amazing team of people.

Guess I need to figure out how to get more ML papers published!

------
agibsonccc
Congrats on the success so far, Greg. Watching the progress has been great.
Being a bit involved with Gym and also the unconf has given me a peek into
how things are run, and I have to say: it's a startup. It evolves fast and
isn't afraid to take calculated risks. That's a great thing to have for
something as open-ended as what you guys are trying to do.

It looks like a blast and I will continue to keep an eye on things. I used
your previous post to help define a lot of the scope of my position for
myself. I look forward to seeing more of this story unfold.

------
Drdrdrq
Nice write-up - it makes me wish I lived in SF (or USA for that matter) so I
could apply for a job there.

~~~
gdb
Many people weren't living in SF when they applied (though they do live here
now). We will happily sponsor visas!

~~~
Drdrdrq
Thanks for replying! I won't be applying because relocation is not an option
for me in the foreseeable future, so I'll stick to local (EU) or remote jobs.
But if there were something that would make me want to move, such a job would
be it. Kudos!

------
charris0
I really enjoyed reading this journey! It's inspiring and motivating to hear
about people centered around a common goal with a unique and forward-thinking
team structure.

It made me reflect on my best moments in software development and how they
have been similar short bouts of intensity: weekend-long to two-week-long
periods of intense, undivided focus on progressing a task through to
completion, time where every other concern fades away, a flow state where I
will wake up immediately with new ideas or problem-solve in the shower.

Sure, that level of concentration is unsustainable in the long run, but this
article has made me consider that I may have to structure my life around being
able to repeat these kinds of stints more often; 'embrace your funk', as some
would say!

------
faceyspacey
I think they can advance computers' capabilities, but they will never create
consciousness without first understanding consciousness. From my perspective--
if their goal is as lofty as it is regarding achieving the singularity and all
that--they are going about this from the wrong angle. Perhaps it's nice to
fuel the ego with the concept that you're safely ushering in the singularity,
but I see it as more of a spiritual and psychological conquest. And if that's
the case, you're likely on more of a path of personal learning, evolution and
growth, rather than building the underpinnings of a future idiocracy.

I think all these guys have gotten ahead of themselves and are dealing with
things they are the least suited to understand. This is spiritual stuff; no
amount of obsessing and hiding behind your computer screen getting high on
delusions of grandeur is going to do the introspection for them, which is an
absolute prerequisite to having any understanding of consciousness.

I look forward to automation, though. Just know the singularity is not coming
any time soon, if ever. It's funny to watch guys who are basically atheists,
like Ray Kurzweil, with very little spiritual experience think they are
playing god.

Have these people ever consciously astral projected, or even tried? I mean,
these are people who typically doubt that stuff. However, my perspective is:
if they aren't living in that world--i.e. the world of consciousness
explorers, the mystical, where astral projection is possible, reading the many
books by spiritual journeymen like Robert Monroe, etc.--they have a major
blindspot. The most they will ever do is build a computer able to pick ripe
fruit or avoid hitting other cars. This is hardly consciousness. As a
percentage of the capabilities of consciousness, it's like .00001%.
Consciousness is creating ideas. And that's maybe just one aspect of
consciousness.

No amount of machine learning will take the place of the higher truth of our
reality system that makes such creativity possible. You will have to attain
that wisdom through dedication to the spiritual path, and even then, it's
likely not something you can just hand out to anyone--at least not in a way
that will be effective for the recipient. You see what I'm saying? You won't
unless you've pursued spirituality and had a few mystical experiences of your
own.

Next: say you attain that piece of wisdom--the way the reality system works is
not such that you can easily just automate it. And if you can, because you
have now become a consciousness creator, well then, it's likely not automated
through machinery, but through more biological and mental/telepathic means.
Now, you either know exactly what I'm talking about or you have a bunch of
doubt to cast--for what I'm saying is breaking down your own beliefs, and
that's not comfortable for you--but if you do know what I'm saying, then you
know the people behind this OpenAI initiative won't be able to achieve their
aims.

I wish them the best of luck, though, and look forward to advancements in
machine learning. I think they would be more productive if they understood
what they were up against. It would allow them to set more realistic goals--or
perhaps set them on a path to do the spiritual introspective work first.

