
OpenAI should now change their name to ClosedAI - gitgud
https://www.reddit.com/r/MachineLearning/comments/aqwcyx/discussion_openai_should_now_change_their_name_to/
======
gas9S9zw3P9c
Someone asked for a more nuanced perspective, so here we go.

For a lot of AI researchers, OpenAI has been a huge disappointment. We had
hoped that OpenAI would be the company to democratize AI with good open
source work, transparency, no PR bullshit (see: DeepMind), and evangelism.
That they would develop in the open, and perhaps even do research in the
open. You know, kind of like the name says.

It all started out okay with their release of OpenAI Gym, and the tutorials,
leaderboards, and competitions around it. That was when Karpathy was still
there. Over time, many projects have been abandoned, poorly maintained, or
have just disappeared [1]. And many projects they promised never
materialized [2]. OpenAI became just another research lab obsessed with
publishing papers in closed (!) journals, indistinguishable from Google AI,
DeepMind, FAIR, MSR, and the many others.

There is nothing open or different about them. Most paper code is not
published, and even when it is, it's just the typical poorly written and
unmaintained research code that you see from other labs. None of their
infrastructure is open source either, because it's needed to maintain their
competitive advantage to train models and publish research papers. GPT-3
being offered as a paid API to a select number of people is the latest joke
in a long series of jokes. All of this would be fine if it were not for the
name and branding of being a transparent and well-intentioned nonprofit. It
is just misleading, and that rubs many people the wrong way, as if the whole
"open" thing was just a PR stunt.

HuggingFace [0] these days is pretty much what OpenAI should have been, but
only time will tell what happens.

[0] [https://huggingface.co/](https://huggingface.co/)

[1]
[https://www.reddit.com/r/MachineLearning/comments/aqwcyx/dis...](https://www.reddit.com/r/MachineLearning/comments/aqwcyx/discussion_openai_should_now_change_their_name_to/egl03jv/)

[2]
[https://github.com/openai/roboschool/issues/159](https://github.com/openai/roboschool/issues/159)

~~~
steev
I don't see how this is a nuanced perspective - it seems to restate the same
complaints/arguments just about every comment makes in these discussions.

A nuanced perspective would look at the arguments as to why OpenAI is doing
the things they are doing. For example:

* OpenAI publishes in closed journals (actually conference proceedings) because that is where all the cutting-edge research is published and reviewed. I cannot recall an OpenAI paper that wasn't available either via arXiv or their website, despite being published in a closed venue. What is the alternative here? Where should they go for quality peer review? Yes, you can argue the peer review at top conferences is not high quality, but is it worse than no peer review, or peer review from open-access no-name journals?

* How does OpenAI make money? How much are they bringing in? How much does it cost to support things like OpenAI Gym? How much would it cost OpenAI in bandwidth to host pre-trained versions of GPT-3? At some point a company needs to make money and prioritize resources; they can't give everything away for free in perpetuity.

I don't think these questions have obvious answers - there is give and take.

~~~
joelg
> At some point a company needs to make money

:-/

OpenAI started as a non-profit.

~~~
eightysixfour
Non-profit does not mean “spends money in perpetuity with no revenue.”

~~~
eggsnbacon1
That's not what happened with OpenAI though. They're not a non-profit
anymore; they changed to a "capped-profit" (lol) model.

I didn't know this was even possible/legal. Start as a non-profit for all the
tax advantages and convert to for-profit once you've got a saleable product?
Maybe startups should start doing this.

~~~
chrisco255
What's the point? If your business doesn't turn a profit then you don't owe
business income taxes anyways. Most businesses take several years to reach
profitability.

------
csomar
Having worked a little with GPT-2, I believe OpenAI is intentionally
sabotaging access to their AI algorithms and data. They started this
sabotage with GPT-2, and with GPT-3 they simply didn't open source it.

For GPT-2, their repository
([https://github.com/openai/gpt-2](https://github.com/openai/gpt-2)) is
archived, which is as good as saying that the project is abandoned and will
receive no updates. The project already doesn't build/run properly. This
issue
([https://github.com/openai/gpt-2/issues/178](https://github.com/openai/gpt-2/issues/178))
could be solved relatively easily, either with a one-line code fix or by
merging a pull request
([https://github.com/openai/gpt-2/pull/244/files](https://github.com/openai/gpt-2/pull/244/files)).
This is not happening, and I have a hard time believing it is in good faith.

Oh, and by the way, I believe saying "this AI stuff is dangerous to the
world" is much the same as politicians saying "we need to check your web
history for pedophilia." It's funny how some people don't see the irony in
opposing the one while supporting the other, when they are practically the
same.

~~~
Donthatme
I am not remotely in this field and have not been following this closely at
all. With that being said, what obligation do they have to maintain GPT-2?
Did they have some stated commitment that they walked back, or am I missing
something else?

~~~
Frost1x
Their charter is here:
[https://openai.com/charter/](https://openai.com/charter/)

>"We are committed to providing public goods that help society navigate the
path to AGI. _Today this includes publishing most of our AI research_ , but we
expect that safety and security concerns will reduce our traditional
publishing in the future, while increasing the importance of sharing safety,
policy, and standards research."

So, in a perfect world you would publish not only your research but also the
code that is fundamental to it. Maintaining (or abandoning) research code, on
the other hand, is an entirely different, costly story, and an artifact of
research-focused software development: such code is typically abandoned.

Personally, I see a huge flaw in the underlying philosophy. The presumption
that this specific organization is, or can somehow be, benevolent flies in
the face of all history. With nuclear weapons, most of the scientists came
to regret supporting their countries, regardless of how benevolent they
thought those countries were.

In general, any sort of concentrated power tends to corrupt. It takes a very
special mindset to understand power and refuse to abuse it. I'm not sure
that mindset can be easily learned or trained, or that you could expect
everyone in a large group with access to such power to adhere to its
principles.

------
nexuist
The problem is that OpenAI's motto is: "Discovering and enacting the path to
safe artificial general intelligence."

This does not mix well with very common AI tasks, such as facial recognition,
deepfakes and deepnudes. With GPT-3 we are seeing levels of text comprehension
and response that we've never seen in ML before.

How can we abuse this? Well, we can conduct text-based gaslighting and
manipulation on an unprecedented scale. Imagine choosing a Twitter profile to
target, having the AI read every single tweet that person has made, and then
DMing them the insults or slurs that would most effectively apply to their
personality. Imagine those Nigerian prince emails, but with each scam
personally tailored to its audience and any follow-up questions answered
with auto-generated but believable lies. Imagine GPT-3 being used to message
young children on Instagram by the millions and trick them into giving up
personal information or images.

I think, ultimately, there will be no such thing as "safe" AGI, because the
intelligence we already have as humans can be and has been used to hurt
others. OpenAI faces an existential crisis in this regard, and their best
answer so far is simply to control access to the model so they can revoke it
when bad actors are caught. That is something they can't do if the model is
free and open source, as we all want it to be.

~~~
csomar
I thought the point of OpenAI was to democratize AI access. By giving access
to _everyone_, you level the playing field, making it available to small and
big, good and bad actors alike.

Right now it's only accessible to _some_ actors. And good and evil are
relative by definition.

~~~
johnwyles
This is the exact point. Rarely has controlling information and science ever
benefited humanity.

------
jcims
One thing that frustrated me about this conversation at the time was that
OpenAI explicitly stated, right in the initial release announcement, that
part of their rationale for the staged release of GPT-2 was to force the
conversation around releasing a powerful capability to the world. Basically
a fire drill for the real thing. This was rarely acknowledged or discussed;
the response was almost entirely a combination of dunking on OpenAI for
overhyping the product and excoriating them for interfering with open
research.

I don't know that GPT-3 is at the level of material damage, but it's clearly
moving in that direction. Watching someone who has a good sense for
composing the right kind of prompt interact with it is a spine-tingling
experience.

Given how poorly the PULSE demo was received when it hallucinated fuzzy
photos of Black folks into tan white people, it also makes PR sense not to
release pretrained models that are obviously going to be easily tricked into
saying terrible things. Exposing the model as an API gives them the ability
to police it a bit and to study how it behaves in the real world.

Or it could just be a giant sellout and cash grab.

~~~
londons_explore
To be fair, with a multi-terabyte model, the number of people who have the
money to make use of the model is probably only a few hundred...

~~~
dx034
Rather a few thousand at least. Most larger companies have that amount of
data (whether they can use the model is another question).

~~~
londons_explore
Yep, but running inference on it at any reasonable performance requires you
to have all of it in GPU RAM, i.e. you need a cluster of ~100
high-performance GPUs.

~~~
trsohmers
The largest version of GPT-3 is 175B parameters, which is ~350GB in fp16. I
frequently use 8x and 10x RTX 8000 boxes (and can access a 16x JBOG system
as well); the 8x system has 384GB of VRAM. These sorts of systems start at
only ~$60k.
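
For anyone checking those numbers, here's a minimal back-of-the-envelope
sketch. It assumes fp16 weights and the 48GB of VRAM on a single RTX 8000,
and ignores activation memory and batching overhead entirely:

    import math

    PARAMS = 175e9        # GPT-3 parameter count, per the paper
    BYTES_PER_PARAM = 2   # assuming fp16 weights
    GPU_VRAM_GB = 48      # one Quadro RTX 8000

    weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
    gpus = math.ceil(weights_gb / GPU_VRAM_GB)
    print(f"{weights_gb:.0f} GB of weights, {gpus}+ GPUs")
    # 350 GB of weights, 8+ GPUs -- consistent with the 8x RTX 8000
    # (384 GB) box above, with little headroom left for activations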

------
timkam
Can anyone point to a more nuanced perspective on this? Ideally not only
with regard to OpenAI; it is still quite common that research published in
some of the "top" venues does not disclose the underlying source code.

Anecdote: while peer-reviewing, I once pushed researchers to publish their
code, and it turned out the code was super slow compared to the benchmark
algorithms they compared against (the comparison only included accuracy,
not speed), something I am sure the authors were aware of but chose not to
report in the paper.

~~~
albntomat0
Here's my alternative view:

If OpenAI had open-sourced GPT-3, there would be an equally angry thread
about how they were endangering democracy/social order/etc. and not being
responsible with the powerful tool they had created (see other threads on HN
regarding those working on facial recognition).

Both that group and the one posting here have valid points, and there would
be strong, valid critiques of their decision no matter which way they chose.

~~~
sudosysgen
But we do realize that sooner or later a model even more powerful will be
released to the public, right?

And as far as protecting democracy goes, I assure you that the geopolitical
enemies of your nation either have similar models or will have them very
soon. There is very significant investment in ML models for text
manipulation going on behind closed doors, funded by states.

~~~
albntomat0
> But we do realize that sooner or later a model even more powerful will be
> released to the public, right?

And at that time, we can equally criticize whoever releases it.

> And as far as protecting democracy goes, I assure you that the
> geopolitical enemies of your nation either have similar models or will
> have them very soon. There is very significant investment in ML models for
> text manipulation going on behind closed doors, funded by states.

True, but it's a good idea to keep a high bar for developing and using it.
There's a large difference between the resources of a nation-state and those
of various criminal enterprises.

Per [0], GPT-3 took $12 million to train. That does not include the people
with the relevant skills needed to train it, or access to the compute
hardware.

[0]: [https://venturebeat.com/2020/06/01/ai-machine-learning-opena...](https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything/)
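
As a sanity check on the order of magnitude, here's a rough sketch. The
total training compute is the figure reported in the GPT-3 paper; the
per-GPU throughput and hourly price are assumptions, not OpenAI's actual
setup:

    TRAIN_FLOPS = 3.14e23    # total training compute, per the GPT-3 paper
    GPU_FLOPS = 28e12        # assumed sustained throughput of one V100
    USD_PER_GPU_HOUR = 1.50  # assumed cloud rental price

    gpu_hours = TRAIN_FLOPS / GPU_FLOPS / 3600
    cost_musd = gpu_hours * USD_PER_GPU_HOUR / 1e6
    print(f"~{gpu_hours:.1e} GPU-hours, ~${cost_musd:.1f}M")
    # ~3.1e+06 GPU-hours, ~$4.7M -- the same order of magnitude as the
    # $12M figure once multiple runs and overhead are accounted for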

~~~
sudosysgen
I mean, I don't really see the benefit of limiting such a model to such
famously reliable and disinterested actors as nation-states and multinational
corporations.

The kind of criminals who are going to threaten "democracy/social
order/etc." aren't going to be stopped by a price tag of 12 million dollars,
or 120 million dollars for that matter. A single Mexican cartel would have
the resources to pay for that, to say nothing of criminal groups that work
in lockstep with states. The proceeds of crime are _900 billion dollars per
year_; training GPT-3 is well within the means of dozens of criminal
enterprises.

But sure, we can praise OpenAI for protecting us from small enterprises and
individuals, those well-known threats to democracy, while reserving the
model for megacorps and states, which are of course known never to threaten
democracy or social order.

~~~
albntomat0
Said Mexican cartel would also have to find the appropriately skilled
engineers and the compute. There's a gap between having the money available
and actually executing the project.

This discussion hinges on what level of unpleasantness comes from groups who
don't have $12 million in compute resources and the skills to run
large-scale distributed ML training, but who could `git pull` a publicly
available model and hook it up to some Twitter bots.

~~~
sudosysgen
In this world, if you have money, you have access. If a cartel wanted access
to compute, they could just pay for it. If they wanted experts, trust me,
they could pay for those too.

------
dandanua
It's inevitable that AI research will become more and more closed. AI is a
weapon that is potentially much more dangerous than nuclear weapons. It's
naive to think that the most dangerous weapon will be accessible to the
general public.

~~~
vasili111
AI is dangerous when it is closed and inaccessible to the majority of
people.

Take deepfakes. If the technology were closed and inaccessible to most
people, it could be used as a powerful weapon (influencing politics, etc.).
Now that everyone has access to it and is aware that such fakes are not hard
to make, it is no longer much of a weapon. Of course you can still use it as
a weapon, especially in less developed countries, but even there it will
become useless as a weapon after some time. I think the person who made
deepfakes public saved many lives, and that is the best approach to dealing
with the so-called "danger of AI".

~~~
dandanua
I don't see how deepfakes are less dangerous through their availability.
Because everyone is now aware? That doesn't mean the tool has to be open.
Also, no one does proper fact-checking nowadays, and a deepfake attack can
do a lot of harm anyway. Or maybe because you can retaliate with a
counter-deepfake? Well, that's just ridiculous.

What I see is deepfakes being used for fake porn videos of known actresses,
or of ordinary people, to harass and blackmail them.

I don't see how their availability saves lives.

~~~
vasili111
Because people are now aware that these kinds of fakes exist, it is much
harder to use them for more serious things like politics.

------
Nuzzerino
There's always OpenCog:
[https://github.com/opencog](https://github.com/opencog)

~~~
mindcrime
Now is an especially good time to join the OpenCog project. Ben and the
folks behind it are just now starting a big push to re-architect big chunks
of it, including the AtomSpace. And OpenCogCon was just held as a two-day
virtual conference; the entire thing is available on YouTube.

I'd encourage anyone who's interested to join the OpenCog Slack and/or
Google group and check out what's going on.

~~~
gardenfelder
Day 1:
[https://www.youtube.com/watch?v=iZOhZFd52y4](https://www.youtube.com/watch?v=iZOhZFd52y4)

------
r34
I'm not surprised. Ever since I saw S. Altman talking in one of his videos
about "things that you (the employer) can't tell your employees," I have
completely distrusted that guy.

------
occamrazor
OP refers to GPT-2. OpenAI has since published both the code and the trained
model.

~~~
moonchild
From the gpt2 code release[1]:

> We are still considering release of the larger models.

Is there something I'm missing?

[1] [https://github.com/openai/gpt-2](https://github.com/openai/gpt-2)

~~~
patresh
That part of the README seems to be out of date; they released the largest
GPT-2 model last year:
[https://www.openai.com/blog/gpt-2-1-5b-release/](https://www.openai.com/blog/gpt-2-1-5b-release/)

~~~
gardenfelder
Full reference: [https://opencog.org/2020/07/virtual-opencogcon-july-15-16/](https://opencog.org/2020/07/virtual-opencogcon-july-15-16/)

------
minhazm
One of the early stated goals of OpenAI was to advance AI but do so safely.
Can you imagine if scammers and "bad actors" got access to GPT-3? You could
create such sophisticated and targeted scams. Things could get extremely ugly
fast. The choice to limit access to this seems to be completely in line with
their original values.

------
jchw
(2019)

------
Kutta
OpenAI should never have been open, been named "OpenAI", or advertised
itself as being open. At the time of OpenAI's inception, much of the AI risk
community deemed it harmful, although that wasn't said aloud much, because
it is a delicate affair to criticize a misguided effort on AI safety when
the status quo was almost no effort being spent on AI safety at all. It is a
minor consolation that OpenAI turned out to be less open than initially
advertised.

[https://web.archive.org/web/20200518081726/https://slatestar...](https://web.archive.org/web/20200518081726/https://slatestarcodex.com/2015/12/17/should-ai-be-open/)

~~~
AlexandrB
So "open, like a gate" not "open, like a public resource".

~~~
non-entity
Somewhat random, but this debate about the "Open" in OpenAI's name reminds
me of OpenVMS, which is not open source but supports "open systems" like
POSIX.

------
thedudeabides5
Funny how there's so much emotional processing here from the open-source
community.

What started as a techno-utopian non-profit project has transformed into a
well-branded, very much for-profit corporation, right under their noses!

I say good for them. Look forward to seeing how the tech plays out.

------
throwaway7281
One good thing about this: an actually free and open ecosystem will emerge.
It will take time and countless person-hours, but it will happen, and it
will prevail.

------
eaxitect
OpenAI is open as businesses are open.

------
sharker8
How are people getting gpt-3 api keys? Might I have one?

------
vladimirsvsv77
Agree

------
ypcx
GPT-3 is (the process of) general AI being born in front of our eyes,
catching us unprepared, making us realize that we may be a bit rigid in our
understanding of what _intelligence_ really means. If primed correctly, can
GPT-3 produce worthwhile improvements to its own paper or implementation?
Can GPT-3 be made to prime and query itself in a loop, by asking it to
produce a better query for itself? Can its knowledge already be used to
disrupt the existing frontiers of human thought? The simple and safe answer
here is that we don't know yet.
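
To make the self-querying question concrete, here is a minimal sketch of
what such a loop could look like. Everything here is hypothetical:
`complete` is a stand-in for whatever completion call you have access to,
and whether iterating like this actually improves anything is exactly the
open question.

    # Hypothetical sketch only; `complete` stands in for a GPT-3 call.
    def complete(prompt: str) -> str:
        raise NotImplementedError("stand-in for a text-completion API")

    def self_refine(task: str, rounds: int = 3) -> str:
        answer = complete(task)
        for _ in range(rounds):
            # Feed the model its own output and ask for a better version.
            prompt = (f"Task: {task}\nPrevious answer: {answer}\n"
                      "Improve this answer:")
            answer = complete(prompt)
        return answer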

In a world full of ideological adversity and cold-blooded corporate
competition, slowing down the release of technology which may or may not be
capable of a runaway effect (a.k.a. technological singularity) in the hands of
an adversary is an _intelligent_ and prudent thing to do.

~~~
metafunctor
Nope. The simple and safe answer to all those questions is certainly "no".
GPT-3 is not intelligent in the way most of us would describe intelligence.

~~~
icebergwarrior
I don't think you can make such definitive statements.

GPT-3 is certainly intelligent in the way a lot of us would describe
intelligence. It can produce content in a way that is indistinguishable from
humans.

We don't know what else it can do. We don't know the pace of improvements
happening here. There are a lot of open questions.

------
longtom
I have long suspected that they are already being tightly controlled by the
U.S. military, who obviously know what is at stake and logically need to
maintain AI supremacy. We are in the midst of an AI arms race, and it won't
stop until we reach the singularity or people start nuking each other to
thwart a major cleansing operation by slaughter nanobots.

