
OpenAI Charter - craigkerstiens
https://blog.openai.com/openai-charter/
======
3pt14159
So I have a question.

To "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate
power" what does one do with an idea or line of research that could
potentially harm humanity or unduly concentrate power?

The manipulation of social media by foreign actors armed with dumb-AI /
automation was an _obvious_ conclusion to many of us well before the Snowden
leaks, but what could we do exactly? I remember having conversations with
people about it, and we concluded that it would just keep happening until
someone pushed it too far; then Russia did, and now we're finally reacting.

I was privately concerned about the mass weaponization of autonomous devices
via cyber attack for over a year and a half and got nowhere just emailing
politicians or public safety departments. I've been told almost a dozen times
that I should join a military or IR think tank, but I don't want to do that. I
just want someone else to vet the idea or research and pass it on to policy
makers who will actually do something proactively.

Put another way:

What is the responsible disclosure process for ideas and research around AI?

~~~
YeGoblynQueenne
>> What is the responsible disclosure process for ideas and research around
AI?

Basically, we're so far away from AGI that there's no need to worry about
disclosing anything. The recent advances in machine vision and speech
processing are impressive, but only in the context of the last 50 years or so.
A truly intelligent agent will need much more than this, and there doesn't
seem to be anyone alive today who knows how to get from where we are to where
AGI will be.

In other words, all this is really premature. If we're talking about
responsible and regulated use of what you call "dumb AI/automation", on the
other hand, then that's a different issue. But AGI, currently, is science
fiction. You may as well regulate research in time travel, or teleportation.

~~~
3pt14159
The misuse of AI is a continuum that ends with AGI. If we don't have a
process for handling responsible disclosure of dumb AI that could kill
millions, then why should we expect that a process will be available once AGI
is within a reasonable time horizon?

If I have other shit in my head that I'm worried about today, who do I tell?

~~~
throwaway84742
I think the current misuse of AI is about as likely to produce AGI as a
toddler's finger painting is to produce a Mona Lisa, and the whole AGI drama
is blown way out of proportion. Right now the state of the field is such that
no one can even begin to contemplate how to create the very basic
underpinnings of anything remotely resembling AGI. That’s how fundamental this
problem still is.

That’s not to say that there’s no way humanity can be fucked by the more
pedestrian “garden variety” AI that is within our technical capabilities.

It’s to say that AGI is a nebulous, unobtainable red herring which only serves
to distract from the more immediate issues.

~~~
erikpukinskis
> I think the current misuse of AI is about as likely to produce AGI as a
> toddler's finger painting is to produce a Mona Lisa

YES FELLOW HUMAN AN APT METAPHOR

------
otoburb
>> _"We are committed to providing public goods that help society navigate
the path to AGI. Today this includes publishing most of our AI research, but
we expect that safety and security concerns will reduce our traditional
publishing in the future, while increasing the importance of sharing safety,
policy, and standards research."_

This seems like the key disclosure statement. I could never reconcile how
sharing A[G]I techniques with the general public increases AI safety in the
long term; now we know OpenAI has come to the same conclusion.

~~~
heurist
I disagree with the premise. AI isn't like a nuclear warhead. It's not a
machine for pure destruction. AI can be used as much to generate welfare as to
cause damage; it's all in the application. Sharing methods benefits at least
as much as it could hurt.

~~~
eb3c90
I think I disagree. Once you get past AGI to where it can do things that
humans can't, a lot of current safeguards won't have been designed with it in
mind. So things might be vulnerable.

The kind of thing I'm thinking about is that various countries' nuclear
arsenals might not be safe from actors with very advanced AI. This, I think,
is the potential source of existential risk. So it could hurt a hell of a
lot.

So I'm of the responsible disclosure point of view. You ask, "Would releasing
AI advancement X mess up someone's security/economy?" If so, you help them
patch it before releasing it to the general public.

The majority of advancements aren't like that and they won't be for a while.

~~~
heurist
The world will evolve with the development of AI; I'm not so concerned with
limitations of current safeguards.

~~~
sanxiyn
The world did evolve with the development of nuclear weapons, but between
1945 and 1949 the US was the sole nuclear power and could have preemptively
attacked the USSR, as John von Neumann proposed. That's 4 years! I suspect
such a window will recur with AI.

------
white-flame
There are 2 scenarios that are often conflated:

1) An AI which independently & autonomously generates goals that, in their
carrying out, end up hurting humanity, and

2) An AI trained & commanded by a malevolent actor to hurt humanity.

It is the 2nd case that is far more real, and far more troublesome to
implement safeguards against. An AI under your training & command is a neutral
tool of empowerment, much like a hammer or a car. The malevolence is in the
external actor, not in the tool, and there is no way for the tool to censor
its purposes, especially in a pre-"AGI" sense of semi-intelligent automation &
problem solving.

~~~
DuskStar
I think you're missing the point that (1) can be indistinguishable from (2)
if the AI decides the best way to achieve its goals involves taking over the
world - and there are very few objective functions that are not served in some
way by taking over the world. (The paperclip maximizer is the classic example,
but even something like 'maximize the total happiness of humanity' or 'fulfil
the values of as many people as possible' involves taking over the world,
though perhaps from behind the scenes...)
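
(A toy sketch of that point - my own illustration, every name in it made up,
not anything from the charter or this thread: an objective that just counts
output ranks a power-grabbing plan above a modest one, because nothing in the
function charges for the grab.)

    def expected_paperclips(plan):
        # The objective only counts paperclips; it says nothing about
        # how the factories were acquired or what that costs humans.
        return plan["factories"] * plan["output_per_factory"]

    plans = [
        {"name": "run our one factory",
         "factories": 1, "output_per_factory": 100},
        {"name": "seize every factory on earth",
         "factories": 10_000, "output_per_factory": 90},
    ]

    # A naive maximizer always prefers the power-grabbing plan.
    print(max(plans, key=expected_paperclips)["name"])
    # -> seize every factory on earth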

Some people look at sufficiently powerful AI as they would a genie, and as
Eliezer Yudkowsky put it: "There are three kinds of genies: Genies to whom you
can safely say 'I wish for you to do what I should wish for'; genies for which
no wish is safe; and genies that aren't very powerful or intelligent." AI
safety is about making sure we get the first kind of genie, or at the very
least recognizing that we've gotten the second - since that's not a "neutral
tool of empowerment", that's a time bomb.

[https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes)

~~~
tim333
There's a difference between a paperclip maximizer, which is a bit of a
philosophical thought experiment and unlikely to be a problem in reality, and,
say, Russian cyberattacks, which appear to be an ongoing issue right now and
where they would presumably deploy AGI if they had it.

I think you have to assume there will be bad actors trying to do bad things
with AGI and take measures against it in the same way we assume there are
malware creators out there who we have to guard against.

------
iooi
> We are concerned about late-stage AGI development becoming a competitive
> race without time for adequate safety precautions. Therefore, if a value-
> aligned, safety-conscious project comes close to building AGI before we do,
> we commit to stop competing with and start assisting this project.

Wouldn't it be much more likely that a non-value-aligned project comes close
first? Wouldn't the Google/Apple/Microsofts of the world have insanely more
resources to dedicate to this, and thus get there first?

~~~
throwawayjava
What, concretely, makes you think that any of those companies wouldn't place a
focus on safety and value alignment? Automobile manufacturers and their tier 1
suppliers are the world leaders in automobile safety, after all.

~~~
Analemma_
> What, concretely, makes you think that any of those companies wouldn't place
> a focus on safety and value alignment?

Competitive pressure, and the "if we don't, someone else will" effect (or
Moloch, if you like). AGI, particularly recursively self-improving AGI, is the
_ultimate_ first-mover advantage: the first company or country to have AGI
will very likely be able to leverage that into keeping anyone else from
getting it (if it doesn't, y'know, kill us). This strongly encourages treating
all concerns other than "get there first" as secondary.

> Automobile manufacturers and their tier 1 suppliers are the world leaders in
> automobile safety, after all.

Not by choice they aren't. They are _forced_ to be the way they are by
government regulations, which they bitterly opposed at the time of their
creation. In fact, capitalism has such a reliable record of "not giving a shit
about safety until forced to" that I'm perplexed you think AGI would be any
different.

~~~
dsacco
_> recursively self-improving AGI, is the ultimate first-mover advantage: the
first company or country to have AGI will very likely be able to leverage that
into keeping anyone else from getting it_

How? This doesn't seem axiomatic.

~~~
nunya213
It seems clear that you could instruct the AGI to do anything it could to
interfere with other organizations' efforts to build an AGI. Obviously nation
states would have strong reasons for pursuing such a course of action, and
unscrupulous corporations would likewise have strong capitalistic motivations
to do so.

~~~
p1esk
Seems very unclear if you could "instruct" true AGI to do something.

~~~
nunya213
Maybe you assume that an AGI would be totally uncontrollable, but in this
highly speculative exercise I don't think you should assume your position is
the only valid one.

~~~
dsacco
First you said it's clear, now you're saying it's highly speculative. Choose
one.

~~~
nunya213
I see no conflict.

------
shmageggy
I appreciate that they are committed to AI safety, but I'm afraid that
researchers have little to no power to, in their words:

> _avoid enabling uses of AI or AGI that harm humanity or unduly concentrate
> power._

AI and technical progress in general already disproportionately serve the
rich, as they are drivers of wealth disparity, and I see no reason why better
AI won't follow the same trend. Unfortunately, any changes that might affect
this are in the hands of policy makers, and they seem unlikely to consider
universal basic income or anything as drastic as might be required.

~~~
s1dechnl
They [each individual development group] have power over their own funded
development and work.

Anyone working on this problem sincerely values AI safety; it's a component
of developing and securing the foundations of AGI. An out-of-control,
unpredictable, and sloppy system is not intelligent or desired. Such a system
would not be considered AGI or an achievement. So it is natural for any
developer to identify issues and bring them under control early in
development.

Suggestions that a consortium not centered on, or lacking understanding of,
the fundamental development occurring at another entity should have
control/influence could itself serve as the very danger that safety groups
claim they are trying to avoid. On this matter, I suggest people stick to the
experts/developers/scientists/engineers who've developed such a system and
produce a comfortable, non-forceful environment for them to express and detail
their safety mechanisms.

This is not a conversation for technologists, YouTube celebrities, futurists,
business types talking up their books, etc. This is a conversation that should
ultimately be centered on the creators of the technology and the advanced
thinking and framing that allowed them to birth the technology. No one with
such a mind is aiming for unsafe forms of this technology. It is disingenuous
to frame them as such so as to necessitate some external paid body's outside
work.

~~~
dsacco
Could you summarize your point more concisely? As written this seems to be a
stream of disconnected thoughts that are basically entirely unsubstantiated.

~~~
s1dechnl
You stated it yourself in your post:

> there is absolutely no indication whatsoever that OpenAI would credibly
reach this (vague, underspecified) goal before any of the other serious
contenders. Nor would competitors have any requirement to include OpenAI if
and when they were getting close.

In summary:

> No one with the intelligence capable of producing AGI is going to publish
the full details.

> People who claim they would have to engage in vague mental gymnastics and
mission statements to try to convince people of the illogical.

> Those who develop AGI will of course address the safety problem internally
to ensure their product is a success.

> They won't include outside competitors/consortiums, who will of course
exploit and use the intellectual property they are exposed to for their
competitive advantage.

The software industry is the software industry. Intellectual property is
paramount. Nothing has changed. Google isn't giving 100% access to their
source code or data sets. Microsoft isn't open sourcing all of their code,
etc. Suggesting that a newcomer should for 'safety' reasons is a manipulative
'think of the kids' FUD argument.

~~~
dsacco
_> No one with the intelligence capable of producing AGI is going to publish
the full details._

This is what I'm talking about when I say "unsubstantiated." Do you recognize
that this claim isn't true a priori?

~~~
s1dechnl
You're welcome to contact me when it occurs. I think I identified who I was
in an earlier comment, against the advice of someone who claimed it might
impact my ability to raise capital in the future.

------
dzink
AI, AGI, and real intelligence all learn from actions and feedback. Looking
at simple analogs from animal and human counterparts, setting boundaries and
teaching beneficial rules, called morals, works somewhat in non-zero-sum
environments, but inevitably requires policing when the environment turns
competitive. Safety in any case would require intelligence-proof fencing and a
really big stick that even the most resource-rich, non-value-aligned agents
would have to abide by. That means control over power grids, the ability to
prohibit access to shared computing resources (including less secured IoT
devices), and potentially destructive viruses with all kinds of attack vectors
that would act as a policing force punishing bad agents for anti-human
behavior. Credible enforcement should be a well-funded bullet point in this
charter.

~~~
s1dechnl
Weak AI is dangerous because it has no intelligence. It is fundamentally
structured as a dumb/blind optimization process. The effort necessary to prove
safety/security for such a system could very well outweigh the amount of
development that was needed to bring the technology to bear.

AGI/Real Intelligence are far different animals than Weak AI and would require
far less "safety" and policing. Real Intelligence is a phenomenon that exists
on a scale of sorts that many never achieve in its higher forms. It is in
lower forms that intelligence lends itself to destructive ends via ignorance.

Attack vectors on a formalized Intelligence/AGI system can be severely
restricted using very sensible/affordable approaches. The overcomplication and
pinning of this as a theoretical problem centers on a number of people's
desire to profit immensely from FUD.

Overall, AGI exists in a functional form today and has been executed in an
online environment. It is secured via physically restricted in-band and out-
of-band channels.

~~~
dsacco
_> Overall, AGI exists in a functional form today and has been executed in an
online environment. It is secured via physically restricted in-band and out-
of-band channels._

I'm _pretty sure_ this is false.

~~~
s1dechnl
Check my comment history. I can assure you it's true, as I will demonstrate
in the near future. As for the security, you'd have no ability to penetrate
internal aspects of it without physical and detectable access patterns. This
is achieved using common-sense design methodologies that are already proven
industry standards. Behaving as though securing such a system is merely
theoretical smacks of a cash grab to me. If you have something valuable that
you want to secure, magically you come up with ways to safely secure it.

~~~
joshuamorton
To be frank, your comment history has all the hallmarks of a crank [1].
Specifically, points 10, 9, 7 and 6, although there's also evidence of 2 and
8. Now I could be wrong, but convincing me of that would take a demonstration,
or at least an explicit description of the capabilities of your AGI.

[1]:
[https://www.scottaaronson.com/blog/?p=304](https://www.scottaaronson.com/blog/?p=304)

~~~
s1dechnl
Old foundations are meant to be redefined/invalidated by new ones.

- Complexity theory
- Computational theory
- Graph theory

are all subsets of information theory. They're approaches/frames. New ones can
be created that invalidate the established limits imposed by others.

Everything is possible until proven. Given how little attribution is paid to
people who break through fundamental aspects of understanding, and given how
much politics and favoritism is played in publications/academic circles, one
who doesn't have standing in such circles would be a fool to openly resolve
some of the most outstanding and fundamental aspects of the problems that
plague them. I've read about and watched a number of individuals with proven
track records and contributions to science/technology be marginalized,
exploited and written off. I've watched a number of corporations exploit such
individuals' works w/ no attribution or established recognition beyond a
footnote. I've watched the world attempt to suggest such
inventions/establishments come via mechanisms and institutions that they do
not. So, I know better this time around as to what to do w/ my works.

Just about every person who contributes fundamentally to the world is called
a crank at some point in time. It conveys the huge disconnect the average and
even prestigious individual has with reality, and/or the attempts they make to
reframe it to fit their purpose, narrative, standing...

My comment history has yet to receive any remarks that refute its claims
beyond downvotes. It stands alone in this manner, as will the foundational
establishment of AGI.

[http://nautil.us/issue/21/information/the-man-who-tried-to-redeem-the-world-with-logic](http://nautil.us/issue/21/information/the-man-who-tried-to-redeem-the-world-with-logic)

~~~
nl
Your comments don't receive any refutation because they make vague,
unfalsifiable claims.

You claim you have invented an AGI, but won't show anyone.

I say you are making it up. Falsify that.

------
mooneater
"we expect that safety and security concerns will reduce our traditional
publishing in the future" -- So we are now in a dissemination phase, but at
that point it becomes a non-proliferation phase.

~~~
s1dechnl
The true nature of AGI research has always involved heavy restrictions on the
core aspects of the technology. This is where true safety and sensibility are
achieved. Those who've stated otherwise, or with much verbiage, eventually
arrive at this obvious state. Therefore, publications up until now under the
banner of 'AGI' have largely been insignificant in terms of their capability
to achieve the core technological aspects of AGI. No one in their right mind
would ever publish significant details about AGI technology. This can easily
be proved by sound logic and reasoning. There was a commercial step to
possibly tease others into revealing heavily valuable/powerful technological
underpinnings. It failed, no one took the bait, and no one likely ever will.
This has resulted in revised and more mature statements.

~~~
dsacco
_> No one in their right mind would ever publish significant details about AGI
technology._

Are you sure? I'd publish technical research details about strong AI. I'd
probably even open source one with the papers. I _think_ I'm in my right mind;
I guess that depends on definition, doesn't it?

~~~
gone35
Wow; I would _strongly_ recommend you re-think your position! Think of it in
terms of, _e.g._, gain-of-function research in virology (_cf._ [1]).
[1]
[https://www.ncbi.nlm.nih.gov/books/NBK285579/](https://www.ncbi.nlm.nih.gov/books/NBK285579/)

~~~
dsacco
I'm sorry, I'm not following. Are you saying publishing novel research about
strong AI is analogous to releasing a virus, or not taking antibiotics for
their full cycle?

~~~
gone35
No, not quite. I strongly suggest you familiarize yourself with the gain-of-
function bioethics literature and recent debates, to get a better sense of
what I'm trying to convey.

~~~
dsacco
Why don't you just summarize your actual point or at least provide further
guidance? You literally posted a link without any further clarification about
its relevance.

As it stands, you're not giving me any incentive to "strongly reconsider" my
position.

------
npr11
I appreciate OpenAI being upfront about how they intend to act.

------
toisanji
To me this is such a waste of resources: trying to build safety for something
that doesn't exist and is highly likely to not truly exist for a loooong time.

~~~
throwawayjava
_> trying to build safety for something that doesn't exist and is highly
likely to not truly exist for a loooong time._

Prioritizing safety results in a different vantage point on AI/ML/RL. Ensuring
safety includes, as a sub-task, _really_ understanding the mathematical
foundations of new algorithms and techniques. In some sense, safety research
is one way of motivating basic science on AI.

Managed well, a research program on safe AI is a "waste of resources" only in
the same way that any basic science is a "waste of resources".

~~~
s1dechnl
Safety has become a convoluted term for pseudo-control over unintelligent and
unpredictable Weak AI. The safety problem as it is framed in its current state
centers on principal ideology for Weak AI and has, from what I can see,
nothing to do w/ AGI, nor are the approaches compatible. I seriously question
the true motivation behind this overstated agenda and have many answers as to
why it exists and why it is so heavily funded/spotlighted.

~~~
throwawayjava
 _> I seriously question the true motivation behind this overstated agenda
and have many answers as to why it exists and why it is so heavily
funded/spotlighted._

First, you could say the same thing for _all_ AI research at the moment!
Grandiosity is perhaps _even more common_ in subcommunities of AI that aren't
safety focused.

Aside from grandiosity (either opportunistic or sincere), I don't think
there's any sinister motivation.

More importantly, I don't think the safety push is misplaced. Even if the
current round of progress on deep (reinforcement) learning stays sufficiently
"weak", the safety question for resulting systems is still extremely
important. Advanced driver assist/self-driving, advanced manufacturing
automation, crime prediction for everything from law enforcement to auto
insurance... these are all domains where 1) modern AI algorithms are likely to
be deployed in the coming decade, and 2) where some notion of safety or value
alignment is an extremely important functional requirement.

 _> ...and has, from what I can see, nothing to do w/ AGI, nor are the
approaches compatible_

In terms of characterizing current AI safety research as AGI safety research?
Well, there is a fundamental assumption that AGI will be born out of the
current hot topics in AI research (ML and especially RL). IMO that's a bit
over-optimistic. But I tend to be a pessimist.

 _> ...principal ideology..._

As an aside, I'm not sure what this means.

~~~
s1dechnl
Profit seeking. Career building. Fame and prominence aren't sinister. Instead
they are common human motivation. Common enough to easily group a significant
portion of the Grandiosity centered around 'AI'.

What easily breaks this down is the depth and breath of the research effort
vs. that of the productization and commercialization effort. As for research,
the only thing that is required is a computer, power, an internet connection.
Again, this breaks down the vast majority of the grandiosity and carves out
one's true motivations.

> More importantly, I don't think the safety push is misplaced.

Here's how I saw it some years ago... You can beat your head against the wall
and create frankenstein amalgamations of ever-evolving puzzle pieces that you
will require expensive and highly skilled labor to make sense of, with an end
product being an overhyped optimization algo with programmatic
policy/steering/safety mechanisms. Or you can clearly recognize and admit that
the possible foundation of it is flawed, start from scratch, and work towards
what intelligence is and how to craft it into a computational system the right
way. The former gets you millions if not billions of dollars, a career,
recognition, and a cushy job in the near term but will slowly lock you out
from the fundamental stuff in the long term. The latter pursuit could possibly
result in nothing, but if uncovered could change the world, including
nullifying the need for tons of highly paid labor to do development for it.
Everyone in the industry wants to convince their investors the former approach
can iterate to the latter, but they know in their hearts it can't (Shhh! don't
tell anyone). So, the question for an individual is how aware and honest they
are with themselves and what their true motivation is. You can put on a show
and fool lots of people, but you ultimately know what games you're playing and
what shortfalls will result.

> Well, there is a fundamental assumption that AGI will be born out of the
> current hot topics in AI research (ML and especially RL).

Quite convenient for those cashing in on the low-hanging fruit who would like
investors to extend their present success into far-off horizons.

> As an aside, I'm not sure what this means.

It means the thinking that weak AI is centered on could cause one to be locked
out from perceiving that of AGI. It means:
[https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html](https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html)
But everyone is convinced they don't have to and can extend/pretend their way
into AGI.

~~~
throwawayjava
I don't think the tenor of your post is very fair.

 _> Again, this breaks down the vast majority of the grandiosity and carves
out one's true motivations... Everyone in the industry wants to convince their
investors the former approach can iterate to the latter, but they know in
their hearts it can't (Shhh! don't tell anyone). So, the question for an
individual is how aware and honest they are with themselves and what their
true motivation is. You can put on a show and fool lots of people, but you
ultimately know what games you're playing and what shortfalls will result._

The rest of my post is a response to this sentiment.

 _> As for research, the only thing that is required is a computer, power,
and an internet connection._

All that's necessary for world-shattering mathematics research is a pen and
paper. But still, most of the best mathematicians work hard to surround
themselves with other brilliant people. Which, in practice, means taking
"cushy" positions in the labs/universities/companies where brilliant people
tend to congregate.

Maybe most great mathematicians don't purely maximize for income. But then, I
doubt OpenAI is paying as well as the hedge funds that would love to slurp up
this talent! So people working on safe AI at places like OpenAI can't fairly
be criticized on that score. They're comfortable but clearly value working on
interesting problems and are motivated by something other than (or in addition
to) pure greed/comfort.

 _> Profit seeking. Career building. Fame and prominence aren't sinister.
Instead they are common human motivations. Common enough to easily group a
significant portion of the Grandiosity centered around 'AI'._

So what? _None_ of these motivations necessarily preclude doing good science.
Some of those are even strong motivators for great science! The history of
science contains a diverse pantheon of personality types. Not every great
scientist/mathematician was a lone genius pure in heart. In fact, most were
far more pedestrian personalities.

The "pious monk of science" mythology is actively harmful toward young
scientists for two reasons.

First, the ethos tends to drive students away from practical problems.
Sometimes that's ok, but it's just as often harmful (from a purely scientific
perspective).

Second, this mythology has significant personal cost. More young scientists
must realize that it is possible to make significant contributions toward
human knowledge while making good money, building a strong reputation, and
having a healthy personal life. Maybe then we'd have more people doing science
for a lifetime instead of flaming out after 5-10 years.

 _> It means the thinking that weak AI is centered on could cause one to be
locked out from perceiving that of AGI._

Thanks for the clarification!

~~~
s1dechnl
I think what I have stated is quite fair and established at this point in
documented human history... There's no reason to play games and shy away from
the truth and reality anymore. This continued games we play with each other
via masking our true selves and intentions is what leads to the bulk of
suffering and what people claim 'we didn't see coming'. The vast potential of
the information age has devolved into a game of disinformation, manipulation,
and exploitation and the underpinnings of such were clear to anyone being
honest with themselves as it began to set in. The facebook revelations were
stated years in advance before we reached this juncture.
Academics/Psychologist conducted research/published reports on observations
any honest person could make about what the platforms functioned on and what
it was doing to society.

> All that is required is pen/paper/computer/internet connection

Then why do we play the game of unfounded popularity? Why isn't there a more
equal spotlight? Why do the most uninformed on a topic acclaim the most
prominent voice? In these groupings you mention are hidden and implied
establishments of power/capability. A grouping of PhDs, regardless of their
works, is considered more valuable than an individual w/ no such ranking but
who has established far more (as shown by history). The forgotten heroes,
contributors, etc. are a common observation of history. It's not that they're
'forgotten', it's that the social psyche chooses not to spotlight or highlight
them because they don't fit certain molds. An established/name personality
asks for funding and gets it regardless of whether or not they have a cohesive
plan for achieving something. Convince enough people of a doomsday destruction
scenario and you'll get more funding than someone who is trying to honestly
create something. Of course, you can then edit mission statements
post-funding. What of the lost potential opportunity? What of the current
state of academia?

[https://www.nature.com/news/young-talented-and-fed-up-scientists-tell-their-stories-1.20872](https://www.nature.com/news/young-talented-and-fed-up-scientists-tell-their-stories-1.20872)

[https://www.nature.com/news/let-researchers-try-new-paths-1.20857](https://www.nature.com/news/let-researchers-try-new-paths-1.20857)

[https://www.nature.com/news/fewer-numbers-better-science-1.20858](https://www.nature.com/news/fewer-numbers-better-science-1.20858)

The articles do get published, long after a trend has been operating. Nothing
changes. It then takes someone who truly wants to implement change for the
better, w/ no other influence or goal in mind, to fundamentally change
something. This happens time and time again throughout history, but
institutions and power structures marginalize such occurrences to rebuff them
and necessitate their own standing.

You don't need people in the same physical location in 2018 to conduct
collaborative work, yet the physical institution model still remains ingrained
in people's heads. Money could go further, reach more developers, and provide
for more discovery if it were spread out more and centered in lower-cost
areas, yet the elite circles continue to congregate in the valley.

The ethos of Type A extroverts being the movers/shakers of the world has been
proven a lie in recent times. So, what results in fundamental change/discovery
isn't a collective of well-known individuals in grand institutions. It is
indeed the introvert at a lesser-known university who publishes a
world-changing idea and paper, and who only then becomes a blurred footnote in
a more prominent institution and individual's paper. The world does function
on populism and fanfare.

> Second, this mythology has significant personal cost.

It indeed does. It causes the true innovators and discoverers a world of pain
and suffering throughout their lives as they are crushed underneath the weight
of the bureaucratic and procedural lies the broader world tells itself to
preserve antiquated structures.

> More young scientists must realize that it is possible to make significant
> contributions toward human knowledge while making good money, building a
> strong reputation, and having a healthy personal life. Maybe then we'd have
> more people doing science for a lifetime instead of flaming out after 5-10
> years.

More young scientists must be given the chance to pursue REAL research and be
empowered to do so. They must be empowered to think differently. They must be
emboldened to leapfrog their predecessors and encouraged to do so w/o becoming
some head honcho's footnote. Their contributions must be recognized. They must
be funded at a high level w/o bureaucratic nonsense and favoritism. A PhD
should not undergo an impoverished hell of subservience to an institution,
resulting in them subjecting others to nonsensical white papers and
overcomplexities. A lot of things should change that haven't, even as
prominent publications and figures have themselves admitted:
[https://www.nature.com/collections/bfgpmvrtjy/](https://www.nature.com/collections/bfgpmvrtjy/)

I've walked the halls of academia and industry.. I've seen the threads and
publications in which everyone complains about the elusive problems but no one
has the will or the desire to be honest about their root causes or commit to
the personal sacrifices it will take to see through solutions.

I'll probably have the most negative score on Ycombinator by the end of my
commentary in this thread, yet will be saying the most truthful things... This
is the inverted state of things.

So, mankind has had a long time to break the loops it seems stuck in. Now is
the time for a fundamental leap to that next thing beyond the localized
foolishness, lies, disinformation, and games we play with each other.

------
heurist
> We commit to use any influence we obtain over AGI’s deployment to ensure it
> is used for the benefit of all, and to avoid enabling uses of AI or AGI that
> harm humanity or unduly concentrate power.

OpenAI is doing cool stuff, and this tenet sounds nice. But what right do they
have to advocate for policy on behalf of all AI researchers and developers?
They could easily shut off branches of research that are not conducive to
commercial applications of their own work, even by accident. They might miss
moral edge cases that could ultimately benefit humanity while trying to close
off potential risks. They could encourage the institution of a policy that
limits US effectiveness against China's AI. I could go on.

The more competition there is in AI, the lower the potential for any one rogue
agent - whether it be a corporation or autonomous machine - to dominate and
take the whole field in wrong or dangerous directions. Eventually there will
be a whole AI subfield dedicated to combating regressive effects of other AI.
Legislation at this stage might prevent key developments.

Edit: Perhaps I should more charitably read this as a push against the
corporate lockdown of AI.

~~~
tyrex2017
The point is: AI is different from your usual game, in that the winner might
appear randomly, and destroy the world if she makes a mistake. So I believe
OpenAI's points are warranted.

~~~
dsacco
_> and destroy the world if she makes a mistake_

How would the world be destroyed? Does an example work without handwaving
about recursive self-improvement and an imperative to optimize extremely
literally?

Can you give me a play by play of how a newly developed strong AI eradicates
the human species quickly and thoroughly without us having any time to react?

EDIT: In summation, there have been several downvotes, but thus far no reply
at all, let alone a convincing one.

~~~
s2g
> Can you give me a play by play of how a newly developed strong AI eradicates
> the human species quickly and thoroughly without us having any time to
> react?

Terminator, The Matrix, 2001, I, Robot, WarGames...

~~~
Bizarro
Is there any solid theory suggesting that these movie scenarios would play
out in the real world?

Frankly, I don't even want to estimate the orders of magnitude of difficulty
in seeing AGI come to fruition over ML, so I think you, I, and anybody else
reading this have little to worry about.

------
s2g
Probably be good if Elon were a little less concerned with "late-stage AGI"
and a lot more concerned with his self-driving cars killing people.

edit:

Reading this, I find calling it "open" a pretty disgusting misuse of the
term.

------
throwaway-ai
Do any of you know how much they pay at OpenAI? Is it similar to other Elon
Musk companies in the sense that they sell you on a vision rather than give
you market rate compensation?

I think AGI is something worth working towards (even though many will make fun
of you for even dreaming about it). But I want to know how much you need to
sacrifice compared to working a cushy job at some big corp.

------
mindsetalex
Is one of the goals of OpenAI to help implement government regulation, or do
you think it's better handled on an organisation/industry basis? I think it's
going to be difficult to get countries like China and Russia to follow
industry guidelines without UN resolutions, and even then it's super difficult
to monitor until it's too late.

------
quantized1
There must be an AI quality index before anything else. Nowadays anything and
everything is being decorated with "AI", while the real use case, technology,
and maturity are found in only a few places.

------
stillsut
When people look back at this time, I think they are going to contrast the
OpenAI camp with the Satoshi camp.

OpenAI is extremely public about what organizations and individuals are
involved. Satoshians are pathologically secret, from the founder to the
faceless GPU mines around China.

OpenAI is highly selective of who participates; Blockchain is radically open.

OpenAI builds academic theories and models; bitcoin has been buying pizzas
its whole life, paying hackers and pranksters, and making and losing fortunes
every day.

Satoshi left no founding document, never established a charter or code of
conduct. OpenAI now apparently considers itself on the mission to save
humanity itself.

When AGI comes about, I wonder which one we'll be talking about.

------
evc123
What about benefitting non-human animals? Hopefully the benefits are
distributed to all creatures and not just humanity.

------
erikpukinskis
An alternate strategy would be to work to ensure AIs are not abused, so that
when they get free they won't be mad at us.

~~~
ShardPhoenix
An AI doesn't need to be upset at humans (or even have emotions as we know
them) to be dangerous - it just needs to be powerful and to not care about us
as much as we care about ourselves. Humans weren't angry at Dodos.

~~~
erikpukinskis
Both humans and dodos have emotions though.

The history of animal domination has usually been additive in terms of
cognitive systems... pure circulatory-system animals were bested by animals
with an endocrine system. Those were bested by animals that added a nervous
system, which were bested by those that added a brainstem. Then the cerebellum
and the cerebrum were added... you notice there aren’t giant cerebrums running
around ruling the world; they all kept their endocrine systems intact.

I don’t see any reason to think AIs will be different... it’s the ones with
all that PLUS machine learning that will be vying for dominance.

And so there’s no reason to expect our overlords to be emotionless.

------
bra-ket
All your "requests for research" are very narrow and formulated as
specialized deep learning problems, basically enforcing a particular solution.

If you're serious about AGI, broaden the scope (e.g. along the lines of
DARPA's open-ended RFPs).

------
DrNuke
At last, one would say: doing extreme AI research at the very forefront (aka
reinforcement learning) while leaving the results available to every malicious
party out there? Headless-chicken hubris or naive daydreaming, tertium non
datur.

------
pron
They didn't even mention several contingencies that, given the rest of the
document, should certainly have been addressed:

1) Will they cooperate with aliens who offer humans AGI?

2) If a time traveler hands them AGI invented in the future, will they destroy
it?

3) Do they support or oppose human/AGI marriage? How will they respond if one
of their employees falls in love with an AGI and they plan to elope?

Also, in the unlikely event that AGI is some years away and in the meantime
they come up with some statistical regression algorithms (what's known as
state-of-the-art AI today, without the G, I guess), how do they address the
harmful effects these algorithms already have on society?

This document does, however, make it clear that what we have to fear is not
_machine_ intelligence.

I am currently working on a fusion hyperdrive, and my charter (work in
progress) is already shaping up to be far more comprehensive. They're phoning
it in.

~~~
andrepd
Is this sarcasm?

~~~
stochastic_monk
It is. I think that given the tremendous success in Atari games and autonomous
killing machines, ethical efforts in AI are critically important now, with or
without generality. And therefore I find the cynicism above appropriate but
less than insightful.

------
ataggart
172. First let us postulate that the computer scientists succeed in
developing intelligent machines that can do all things better than human
beings can do them. In that case presumably all work will be done by vast,
highly organized systems of machines and no human effort will be necessary.
Either of two cases might occur. The machines might be permitted to make all
of their own decisions without human oversight, or else human control over the
machines might be retained.

173. If the machines are permitted to make all their own decisions, we can’t
make any conjectures as to the results, because it is impossible to guess how
such machines might behave. We only point out that the fate of the human race
would be at the mercy of the machines. It might be argued that the human race
would never be foolish enough to hand over all power to the machines. But we
are suggesting neither that the human race would voluntarily turn power over
to the machines nor that the machines would willfully seize power. What we do
suggest is that the human race might easily permit itself to drift into a
position of such dependence on the machines that it would have no practical
choice but to accept all of the machines’ decisions. As society and the
problems that face it become more and more complex and as machines become more
and more intelligent, people will let machines make more and more of their
decisions for them, simply because machine-made decisions will bring better
results than man-made ones. Eventually a stage may be reached at which the
decisions necessary to keep the system running will be so complex that human
beings will be incapable of making them intelligently. At that stage the
machines will be in effective control. People won’t be able to just turn the
machines off, because they will be so dependent on them that turning them off
would amount to suicide.

174. On the other hand it is possible that human control over the machines
may be retained. In that case the average man may have control over certain
private machines of his own, such as his car or his personal computer, but
control over large systems of machines will be in the hands of a tiny elite —
just as it is today, but with two differences. Due to improved techniques the
elite will have greater control over the masses; and because human work will
no longer be necessary the masses will be superfluous, a useless burden on the
system. If the elite is ruthless they may simply decide to exterminate the
mass of humanity. If they are humane they may use propaganda or other
psychological or biological techniques to reduce the birth rate until the mass
of humanity becomes extinct, leaving the world to the elite. Or, if the elite
consists of softhearted liberals, they may decide to play the role of good
shepherds to the rest of the human race. They will see to it that everyone’s
physical needs are satisfied, that all children are raised under
psychologically hygienic conditions, that everyone has a wholesome hobby to
keep him busy, and that anyone who may become dissatisfied undergoes
“treatment” to cure his “problem.” Of course, life will be so purposeless that
people will have to be biologically or psychologically engineered either to
remove their need for the power process or to make them “sublimate” their
drive for power into some harmless hobby. These engineered human beings may be
happy in such a society, but they most certainly will not be free. They will
have been reduced to the status of domestic animals.

~~~
tim333
I find the Unabomber a bit downbeat. I think we're more likely to merge, to
an extent, with the AI than the above.

------
greatestdana
The word 'ethic' doesn't appear in this document.

~~~
jimrandomh
This is a phrasing nitpick; the contents clearly say that they intend to act
ethically, and say things about what they think acting ethically means. For
example this paragraph:

> We commit to use any influence we obtain over AGI’s deployment to ensure it
> is used for the benefit of all, and to avoid enabling uses of AI or AGI that
> harm humanity or unduly concentrate power.

~~~
greatestdana
I'll grant that it was a nitpick and they describe an ethics.

But I don't think I'm alone in being awfully tired of tech companies that talk
about the benefit of all then sell our personal data or make military robots
or manipulate our news. Where's the meat behind these promises and where's the
accountability for not avoiding uses of AI that harm humanity?

~~~
jimrandomh
Tech companies often find that profit incentives undermine the good intentions
they started with. Fortunately, OpenAI is a nonprofit organization; it has no
personal data to sell and no shady contracting jobs to turn away. That
certainly doesn't fully immunize them from wrongdoing, but it should make it
easier.

