Hacker News
Dear OpenAI: Please Open Source Your Language Model (thegradient.pub)
289 points by hughzhang 26 days ago | 124 comments



I really strongly disagree with this. I don't have much time to write, but:

1. Photo manipulation has been extremely destructive in a variety of ways. People "know" that Photoshop is a thing, yet fake pictures abound at all levels of publication, and unrealistic standards propagate at full speed nonetheless. The effects are widespread and insidious.

A way to automate the production of median-literacy shitposts on the internet, tuned to whatever you want to propagandize, would be a devastating blow. People "knowing" that it was possible would do precious little to stop the onslaught, and it would all but destroy online discourse in any open channel. I'm certain there would be other consequences that I can't fathom right now, because I don't grok the potential scale or nth-order effects.

2. There is no fire alarm for AGI[1]. Is this thing the key to really dangerous AI? Will the next thing be? No one knows. It's better to be conservative.

[1] https://intelligence.org/2017/10/13/fire-alarm/


> 2. There is no fire alarm for AGI[1]. Is this thing the key to really dangerous AI? Will the next thing be? No one knows. It's better to be conservative.

This is certainly a point worth discussing, but it's odd for OpenAI to suddenly stand on principle about this specific result.

OpenAI was founded on the idea that public access to AI tools and research would improve and democratize them. Their website and founders make statements like "Are we really willing to let our society be infiltrated by autonomous software and hardware agents whose details of operation are known only to a select few? Of course not." and "...empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower." Meanwhile, their view on AGI risk seems to be that more access will improve scrutiny and oversight. Since day one, they've taken flak from people like Nick Bostrom and Scott Alexander (https://futureoflife.org/2015/12/17/should-ai-be-open) who argued that their stance amounted to publicly sharing untested, hazardous tools.

And now they balked because... they built an essay generator that's not quite good enough to compete with cheap troll farms that governments can already afford? I don't think they're necessarily wrong to not share this, and I know Sam Altman said a while back that they might not release all their source code. But I don't think this should be looked at in isolation.

If the people going to bat for democratized access to AI suddenly decided that the public can't be trusted with access to AI tools, that's worth talking about.


I think they did change their mind on this after considering the AGI risk more and getting a better understanding of it.

I'm not sure where I read a discussion about this, but I do think it was something they've talked about before.


Is OpenAI anywhere near AGI such that it would warrant such concerns?


It's hard to know what near is and what the timeline is.

This article digs into some historical evidence of this: https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no...

From the post:

"Two: History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.

In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away.

In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction. I believe Fermi also said a year after that, aka two years before the denouement, that if net power from fission was even possible (as he then granted some greater plausibility) then it would be fifty years off; but for this I neglected to keep the citation.

And of course if you're not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima. There were esteemed intellectuals saying four years after the Wright Flyer that heavier-than-air flight was impossible, because knowledge propagated more slowly back then."

Maybe it's hundreds of years out, but maybe it's not. Since it's hard to know, it's probably better to work on the safety aspects now.


Of course, I'm sure you can find plenty of examples where things that are decades away are described as decades away.


Sure - this isn't to imply that it can't happen, or even that it isn't the more common case.

It's only to say that just because something seems far away doesn't mean it is - even if you're the person who will invent it only two years later (and is therefore probably in the best position to know).

Given the high stakes of unsafe AGI and this uncertainty it’s probably worth some people working on goal alignment.

This is somewhat unrelated to the recent release though.


You don’t need AGI to have a problem though.

And even before we have AGI, we may get human-AI hybrids, which would come earlier and be roughly equivalent to AGI.


There's an interesting Slate Star Codex interpretation of this question:

https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-g...

(It's not AGI, but it's more of a step than it seems at first.)


I, for one, welcome our new OpenAI overlords.


NLP researcher here.

Leaving aside the discussion on whether the model is really dangerous or not, the paper gives enough details that the model can be reasonably reproduced (and even if it couldn't, it's an incremental result built on other research that is public, the novelty is mainly in scale). Any big company, government or lab with enough compute can recreate it (and maybe improve it) in six months, maybe less.

Not releasing the model doesn't prevent misuse. It just restricts the set of entities that can afford it: only those with enough resources can build and misuse it. Those with fewer resources cannot, because training it is very expensive.

I personally don't feel very reassured by knowing that only the powerful have access to this. The actors I fear the most are powerful and could train it, anyway. I'd rather it be public so the community can scrutinize it, find out if it's really worth worrying about, and find ways to detect/fight it.


Thanks for laying out your reasoning here :)

However, I don't think you have fully addressed the worrying point. The concern here is that good-quality faking will become less expensive, and thus more faking will happen. Making it even marginally more expensive means there will be less faking overall (no matter the resources of the actors), which is seen as good.

Thus, your argument for more equitable access has to explain how it provides more value than the harm that may be caused by this dynamic. In my opinion, that requires some argument for the potential good this technology could do versus the harm it could cause - which you haven't really made. Just arguing for the "balancing of scales" doesn't go far enough.

For example, you can draw an analogy to weapons regulation. Just trying to balance the scales by giving people easy access to weapons has been shown not to work too well in practice. The rich still have more money to buy bigger, better weapons or defenses, while the poor regularly shoot themselves.

So, to make a convincing case for open sourcing this stuff you should provide a good argument for why we are not talking about weapons here but positive, productive goods with low potential for abuse.


Welcome to the post-information world.

We're going to have to get used to this new reality. Just as those living in the era before recorded pictures or words didn't have reliable records to lean on, we'll grow to distrust all of the things that are posted. This will happen naturally as the volume of fakes increases; people will start to develop a doubtfulness around recorded media.

I don't think we need to worry about propaganda and fake news if the Facebooks and Instagrams become flooded with fake posts from the people we know. It'll become expected and mundane. This seems good for the development of a skeptical public.

When we do need to rely on audiovisual records, such as for court cases, we'll have experts that can investigate and verify.


>if the Facebooks and Instagrams become flooded with fake posts from the people we know. It'll become expected and mundane. This seems good for the development of a skeptical public.

This is not how people or trust networks work.

What you are looking at is a sliver of human behavior - the silver lining on the cloud.

Most people instead will be confused, believe incorrect things, get frustrated when it’s wrong and then begin to trust their gut instincts in an information poor environment.

It would be very natural for people in this scenario to switch to old school centralized trust and truth organizations like clans, states and nations.

People need to understand the world, and we regularly pay money to find out the truth.

Investigation takes time and when the rate at which we can make believable content outstrips the rate at which investigators can be hired to debunk it, then even courts can’t keep up - which happens today.

Further, by leaving it to just the courts, we are also saying that the casual consumer will not have access to a safe internet.

I argued yesterday that Chinese style great firewalls seem to be the likely model for our future, as opposed to the future democracies hoped to create.


This view of the future is based in fear. We don't need to be "protected" by big brother. Let's see how it plays out.


Calling it based on fear or based on optimism is a luxury of the disconnected.

I am telling you how it is currently playing out on the ground in chat rooms, forums, facebook, Twitter and elsewhere, because I worked part time on moderating these places and saw first hand what was going on.

I've spoken to people who do policy for FAANGs and with activists on the ground. This is what is happening, and has happened in third world countries and developing nations without the manpower to investigate most crimes.

This is a genuine concern - as long as engagement driven social media exists.


Perhaps as a byproduct we'll end up with more value placed on human-curated and generated information. Just because there's a lot of noise in the world doesn't mean there aren't objective truths. If FB and Insta get shunted off as sources of news, maybe the good ones will be all that remains trusted. Dare to dream.


The idea of Reputation Economy, yea? Combined with the need for good technology that "guarantees" that https://www.washingtonpost.com really is the washington post.

Reminds me of that theory of the mind stuff around trying to figure out if we're really brains in jars or not - we need to find ways to measure our trust of the source, and the means by which the source is transmitted to us.


I wanted to suggest a crypto enclave built into recording devices that could sign recordings with private keys controlled by the manufacturer but even that wouldn't be too hard to circumvent by piping fake images/audio into such an enclave so long as you have hardware access.


Then maybe we've found the limit of miniaturisation - this could mean that we need devices that could be routinely inspected by the users, hardware-wise. Maybe not every device owned by a person would be like that - this would be something you keep in your safe.


For 1. you only need a few bored grads at MIT/Stanford/another top school (where did DeepFakes come from?) who think it's a cool idea, and no resistance from OpenAI et al. would matter (it should be fairly simple to reverse engineer the small model they released and augment it with more data/layers/loss functions). Soon the "next cool chat app" will have a shitpost generator as its main feature, possibly trained on humor/sarcasm to make people ROTFL, the way Snapchat used to blow people away with (whatever) face filters, making this technique extremely popular.


Nonproliferation strategies do not work with software.

By not open sourcing the model, OpenAI merely slows the proliferation of weaponized language models. Furthermore, it ensures a temporary period of power imbalance in which only the most capable can wield them.

The result is that the rest of the internet is not equally forced to come up with solutions to increase the resiliency and safety of its online communities.

The internet will need a “captcha for weaponized language models”. We need a level playing field so Google needs a solution as much as any phpBB.


> A way to automate the production of median-literacy shitposts on the internet tuned to whatever you want to propagandize

We already have this: it's called mechanical turk (or the 50 Cent army, if you prefer) and it's happening all over the place. I'm not sure computer-generated posts would make much difference.


It makes it virtually costless to do...

that is a big deal

50 cents per message is the difference between physical mail spam and email spam.


This is nowhere close to AGI


And nowhere close to "Open," either. They need to change their name if they're going to take this stance.


This model will probably be reproduced quite quickly - within 2-3 months - if there is an entity with the resources and enough incentive to do it. They did release the paper, and that should be enough to get close.


I agree with you, and would suggest that language based AI research needs to be slowed down.

It allows computation owners to scan and therefore generate content faster than humans naturally can. That would allow them to pollute and crowd the conversation streams we have used since we learned to talk, in a way and at a scale no predator or propagandist has before.


Who knows, maybe we’d go back to valuing face-to-face communication.


> A way to automate the production of median-literacy shitposts on the internet tuned to whatever you want to propagandize would be a devastating blow.

It would be Shiri's Scissor[1].

[1]: https://slatestarcodex.com/2018/10/30/sort-by-controversial/


Awesome, that belongs on /r/nosleep.

Um, it is fiction, right?


Wow. Thanks. Nice read. As a concept it's interesting.

A statement so controversial that it activates only once you discuss it with someone else.

Would Shiri's scissor also be a scissor?


Wow. This looks like it would belong on r/nosleep, with how close it comes to crossing the uncanny valley.


Thanks for your comments!

On 1) photo manipulation: if "dangerous" means that random, relatively harmless fake facts spread easily, I'm inclined to agree. I'm sure I've been tricked by a whole host of "facts" I've read online that are probably actually BS.

On the other hand, if "dangerous" means that a malicious agent can systematically manipulate the public via propaganda a la 20th-century Stalin, I think this is basically impossible. Given widespread knowledge of Photoshop, changing someone's mind about something significant with a doctored photo seems difficult (it is hard enough to change someone's mind with the truth!). Doctored photographs aren't completely harmless, in that they can reinforce existing beliefs (confirmation bias), and random fake facts aren't totally harmless either; I just think the danger here is far below what has been implied elsewhere. In a nutshell, the effects are indeed widespread, but perhaps not so insidious.

On 2) there being no fire alarm, I'm actually very thankful to OpenAI for raising this discussion. While I disagree with their decision not to open source, this discussion was certainly worth having.


> On 1) photo manipulation. If "dangerous" means that random, relatively harmless fake facts easily spread, I'm inclined to agree.

No, I think what they were referring to is even more insidious. I think they meant that Photoshop has enabled, and continues to enable, unrealistic physical standards of beauty and BMI (mostly for women) that are transmitted in advertisements and news articles.

In other words, Photoshop has enabled an alternative visual reality that is not realistic.


But still much less dangerous than stuff like nuclear fallout or Orwellian propaganda? This isn't to say that unrealistic beauty standards aren't a problem, just that they aren't one worth trying to stop the march of technology for. Also, I think advertisements are more to blame here than Photoshop. Do you think the ancient Greeks and Romans looked like Michelangelo's David or any of their own statues? They weren't bombarded by ads, though.


Can't disinformation campaigns lead to instability and increase the risk of nuclear weapon use?

I think the damage that disinformation can cause to democracy is real and we've already seen some of its effects without the ability to automate these things.


Any kind of art that's good enough could also create an alternative visual reality.


> "dangerous" means that a malicious agent can systematically manipulate the public via propaganda a la 20th century Stalin

You don't need total control to succeed.

The same way that early VOIP calls didn't need perfect sound to be better than normal dial up.

All you need is an economic, efficiency, quality or quantity advantage to create a new weapon/tool.

India is facing a spate of lynchings over child kidnappings - driven by WhatsApp forwards that combine videos of brutal Mexican gang killings of children, Pakistani safety warnings, and others. I've seen them; they are being disseminated to people who have never had to harden themselves to the internet or behavior on the net. (There are many people who don't have knowledge of Photoshop, and getting the knowledge to them is Hard. Even getting the knowledge to them can be used to weaken Facts instead of propaganda.)

As a result, villagers have lynched strangers, mentally handicapped people, widows, and anyone they suspect of kidnapping or potentially harming children.

And I have NO idea, why or who is cutting up and making these videos to share on whatsapp. Evil manipulators, people who are doing it for laughs, well intentioned but misguided good samaritans? As a result of such forwards, whatsapp rolled out a limit on how many times a message can be forwarded - a feature they have pushed worldwide recently.

We don't need perfection to do damage, we need efficiency. We already have other systems which can make up the difference in results.


> unrealistic standards propagate at full speed nonetheless

"Propagate"? If you mean e.g. unrealistic physical standards, it seems that (at least in the US) in recent years there have been more and more fat people, and more and more acceptance of "different" bodies (trans, fat, alternative styles - in fashion and society), so if anything, I'd say the reverse...


It's worth noting that there's a surprising difference between the full model (1542M parameters) and the smaller model released in the repo (117M): the full model can handle narrative structure much better (https://twitter.com/JanelleCShane/status/1097652984316481537), while the smaller model tends to go off into trainwrecks, especially if you give it a bespoke grammatical structure (https://twitter.com/JanelleCShane/status/1097656934180696064)

The arguments OpenAI have given about people using the model to create fake news are IMO not good; bad actors will still create fake news anyways (or with the paper/repo, they might even create more bespoke models trained on a specific type of dataset for even better fake news). With the open model, we would know how to better fight such attacks.


My cynical impression is that OpenAI figured out a way to get tons of free PR, with the ultimate “humblebrag” of all time. I don’t think they honestly believe that what they’re doing has a material impact, beyond media coverage and people talking about it as we are here. This situation cries out for Occam’s Razor, and when you cut away the fluff it looks and smells like clever marketing, not in the least because it’s worked.


Hi! OpenAI employee and paper co-author here, speaking entirely on my own (nobody else in the company knows I'm posting this). Occam's razor is a great principle, and I completely see how it might point towards your viewpoint. But I'd like to make a few points which will hopefully correct some misconceptions:

1. When we made the decision to do partial release, the talking points were about dangers and benefits of release - reputational benefits was one of the four main benefits we listed and not talked about much. Everyone in the room, including mostly technical non-comms folk, agreed that caution was a reasonable decision, given our uncertainty about malicious use.

2. I was at every meeting we had with the press, prior to release. We had written and rehearsed our major talking points beforehand (and stuck to them). Here they are: what we did technically, how it's different/new, how it's still limited; generality and zero-shot behavior of the model; how the trends of scaling are steady; the realism of the synthetic text, and how we're potentially on a trajectory similar to the synthetic images (we specifically compared GPT-2 to Alec's 2015 work on DCGAN, not the photorealistic stuff we see now); malicious use cases and policy implications; framing our partial release as an experiment and a discussion-starter. Ultimately, we didn't have fine-grained control over the actual articles written.

3. My impression is our samples are a qualitative jump from anything publicly known prior to our release. Most people (whether academics or journalists) who have seen our demo come away impressed; a small number had more muted responses. I would be pretty surprised if there weren't use cases this immediately enabled that previous language models didn't; whether they are more economically-valuable than malicious, I'm unsure (although I have a guess).

4. I personally don't have any evidence that the people I worked with on the release aren't well-intentioned, aren't thoughtful, or aren't smart.

Stepping back a bit, there are definitely aspects of what we did that I already regret, and perhaps more aspects I will regret in the future. I wish we'd had more time to think through every aspect of release, but opening up discussion to the community doesn't seem like a terrible outcome, regardless of how well thought out our decisions were.


This is exactly what you are talking about.

I'm disgusted by such behavior, especially as action is in counter to their stated values.


Plus, they have open-sourced the code, and the training cost of the full model is estimated at $45k, which is not much for a state actor or a serious org of any kind.
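The arithmetic behind an estimate like that is easy to sanity-check. A back-of-envelope sketch (the device count, hourly price, and duration below are illustrative assumptions, not OpenAI-published numbers):

```python
# Back-of-envelope check on the ~$45k training-cost estimate.
# All inputs are assumptions for illustration: 32 cloud TPU v3-8
# devices (256 cores), ~$8/hr per device on-demand, ~1 week of training.
devices = 32
price_per_hour = 8       # USD per device-hour (assumed)
hours = 7 * 24           # one week
cost = devices * price_per_hour * hours
print(cost)  # 43008 -- in the same ballpark as the quoted $45k
```

Tweak any of the three inputs and the figure shifts, but it stays within reach of any funded lab.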


$45k is "new car" money, plenty of individuals on this site could afford it if they wanted. Don't even have to be that serious about it.


Exactly, I can't imagine that the OpenAI team would be this naive. Their strategy will do precious little to stop nefarious actors who mean business. It will just narrow the circle of well meaning people who could've participated in research.

Besides, it's only a question of time before someone publishes the full model.

The whole gatekeeping thing just feels like empty posturing.


It's not quite so simple: AFAICS they have not open-sourced the training code, only the code for sampling from the pre-trained model.


What would they do with it? It is a text decorator; it can't make sense of itself. Maybe they could make 1000 fake Facebook accounts that post a lot. Still, I doubt Facebook uses text to detect bots; it's easier to use IP/network patterns.


it's not a text decorator, it's a text generator


I mean, it is not some sentient AI. It takes some text and decorates it with 100 other sentences. Most of the time it doesn't make any sense, and it doesn't have intent.


Req'd reading: "OpenAI Trains Language Model, Mass Hysteria Ensues" by CMU Professor Zachary Lipton, http://approximatelycorrect.com/2019/02/17/openai-trains-lan...

A salient quote from the article:

> However, what makes OpenAI’s decision puzzling is that it seems to presume that OpenAI is somehow special—that their technology is somehow different than what everyone else in the entire NLP community is doing—otherwise, what is achieved by withholding it? However, from reading the paper, it appears that this work is straight down the middle of the mainstream NLP research. To be clear, it is good work and could likely be published, but it is precisely the sort of science-as-usual step forward that you would expect to see in a month or two, from any of tens of equally strong NLP labs.


Suffice to say, most of what comes out of OpenAI is vanilla type work that I haven't seen go beyond academic research labs. They do spend a lot of time on PR and making stuff look pretty, I guess.


To be fair they've also released some useful tools, such as the AI Gym.


To be fair, all of their useful tools have been deprecated or put in 'maintenance mode'. See https://github.com/openai/gym


They list several tests with record breaking results here: https://blog.openai.com/better-language-models/#zeroshot

Are those previous records their own or were those the best scores in the whole field of NLP? It sure looks like a pretty big step forward for one model to break that many records.


My conspiracy theory on this: This entire fiasco is actually an experiment to gauge public reaction to this kind of release strategy. Eventually the point will come where the choice of releasing a model may have real ethical considerations, so they are test driving the possibilities with a relatively harmless model.

Given the, in my opinion, huge overreaction to all of this, I fear this may only encourage AI research groups to be more secretive with their work.


That's pretty much what they are publicly saying, right?

> This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.


Ah I missed that.


They acknowledged even in the original blog post that any well-funded group of NLP researchers would be able to replicate their work within a few weeks/months (including whatever corporation, state, or terrorist group you worry about), and that in terms of methodology it is a natural, incremental improvement over existing techniques. So it's sort of obvious that withholding release can't prevent any harm.

The good-faith reason to withhold release is, as you say, to start a conversation now about research norms, so researchers have some decision-making framework in case they come up with something really surprisingly dangerous.

The bad-faith reason is that it gives them great PR. This was surely part of the motivation, but I bet OpenAI didn't quite expect the level of derangement in the articles that got published, and may regret it a little bit.


Can we even say for certain it's an improvement over something like Transformer-XL? As far as I could see, the changes over GPT were a couple of extra layer normalizations and a tweak to their placement, a small change to initialization, and a change to text pre-processing. Other than for pre-processing, I didn't catch anything on the theoretical motivations for these choices, nor any ablation studies. The only thing that can be said for certain is that it used lots of data and a very large number of parameters, was trained on powerful hardware, and achieved unmatched results in natural language generation.
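For readers unfamiliar with the layer-norm tweak in question: GPT-2 moves layer normalization to the input of each sub-block ("pre-norm"), where GPT applied it after the residual addition ("post-norm"). A minimal numpy sketch of the difference (the `sublayer` argument is a stand-in for an attention or MLP sub-block, not real model code):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each feature vector to zero mean and unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_norm_block(x, sublayer):
    # GPT-1 style: normalize after adding the residual.
    return layer_norm(x + sublayer(x))

def pre_norm_block(x, sublayer):
    # GPT-2 style: normalize the sub-block input; the residual path stays clean,
    # which tends to stabilize training of very deep stacks.
    return x + sublayer(layer_norm(x))
```

A small change on paper, which is part of the commenter's point: the headline result is scale, not architecture.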


From their blog post:

>These samples have substantial policy implications: large language models are becoming increasingly easy to steer towards scalable, customized, coherent text generation, which in turn could be used in a number of beneficial as well as malicious ways. We’ll discuss these implications below in more detail, and outline a publication experiment we are taking in light of such considerations.

>a publication experiment we are taking in light of such considerations.

So I think you're correct: they aren't really too scared about this one, but they realize they might be soon. Seeing how this turns out (how quickly it's replicated, how quickly someone makes a wrapper for it that you can download and run yourself in minutes) will inform more serious situations in the future.


When you have whole teams doing "AI safety" at FAANGs, what do you think the logical output of their work would be? Wouldn't it correlate with what we see from OpenAI?


OpenAI is just trying to make a buck. Just like most other programmers, they don't give a damn about how damaging their code is going to be.


This is just false. Cynicism isn't a substitute for knowledge.


OpenAI is literally a non-profit.


Not saying I agree with the parent but while OpenAI is non-profit it's owned by billionaires whose companies directly benefit from their research.


Perhaps they kept it closed source because they sold it to one of those companies? Non-profits are allowed to sell things, after all.


You can literally make millions of dollars working for non-profits. The highest paid CEOs make several million dollars per year and the biggest non-profits have annual budgets in the billions. The annual revenue from all non-profits in the US is in the trillions.


The NFL is a nonprofit organization too...


Do you think that means they don't get paid? "Non-profit" itself isn't a super meaningful term as far as whether money is being made. To be a non-profit, many hoops have to be jumped through, and certain forms of profit have to be zero (I don't know the specifics, but a friend of mine considered starting a non-profit at one point), but the employees are still paid.


I like the work OpenAI has done in the past, so please take my complaints as gentle complaints:

- a very small percentage of generated samples ‘read well’

- I don't see any evidence of 'understanding', at least in the sense that BERT can identify the original nouns that pronouns refer to (anaphora resolution); still, it's amazing to get state-of-the-art results

- the 'we aren't going to share code and results' thing reminds me of the movie The Wizard of Oz: we're asked to trust that they developed cool technology

- in 6 months, high school students might be getting similar results, given how fast our field is moving forward: this makes the danger concern less important in my opinion

This whole thing seems like a throwback to how science was conducted 10 years ago - ancient times in our fast moving tech world.


> the ‘we aren’t going to share code and results’ thing reminds me of the movie The Wizard of Oz: trust them that they developed cool technology

And this is where it stands: They claim to have something.

They won't release it.

If they had nothing but a somewhat creative streak to make up the results, the world would look exactly the same as it would if they had something.

Them lying is the most parsimonious hypothesis here.


Given that experts in the field seem to think this is an incremental step forward, OpenAI has already demonstrated a track record of doing things probably more impressive than this, and the massive loss of credibility that would come with being caught in a lie here, on priors we should expect they're not lying.


I thought this blog post response was pretty good:

OpenAI’s GPT-2: the model, the hype, and the controversy https://towardsdatascience.com/openais-gpt-2-the-model-the-h...

I think this is a better response than the article submitted here. There are ethical concerns and risk with these releases and it's probably a good idea to start considering them. What they did release seems reasonable in context.

I find the argument that making the technology as open as possible leading to people taking a skeptical look at things not very convincing. Recent election interference, disinformation campaigns, and the general inability for the public to disambiguate fact from fiction seems like decent evidence that just because people know something can be faked doesn't lead to critical thinking - confirmation bias is strong.


As a FOSS user you should know that many parts of open-source software used to be outright illegal (e.g. PGP). Open source democratized all of them.


I strongly disagree with the OP's logic when he says Photoshop hasn't negatively affected the world "precisely because everyone knows about Photoshop" - the crux of his argument. IMO the main reason Photoshop hasn't affected the world negatively is that it is HARD to make convincing fakes - which is precisely what OpenAI is ensuring by not releasing the model.

Notice that this reasoning doesn't take into account whether or not GPT-2 can actually negatively impact the world. It says as long as people are aware that the text they are reading could be fake/AI-generated, we'll be fine. I think people are already aware of that. I don't see how releasing the pre-trained model will help with that.


Another thing that has not been discussed yet: OpenAI does not want to be responsible for the output of this model. Can you imagine the headlines? "OpenAI released a text generating AI and it is racist as hell". People should have learned their lesson after the Tay fiasco.

I am ambivalent on the issue of release vs. non-release, but the mockery and derision they faced makes me ashamed to contribute to this field. No good faith is assumed - only projections of PR blitzes and academic penis envy.

Perhaps AI researchers are simply not the best for dealing with ethical and societal issues. Look at how long it took to get decent research into fairness, and its current low focus in industry. If you were in predictive modeling 10 years ago, it is likely you contributed to promoting and institutionalizing bias and racism. Do you want these same people deciding on responsible disclosure standards? Does the head of Facebook AI or those that OK'd project Dragonfly or Maven have any real authority on the responsible ethical use of new technology?

I am not too sure about the impact of a human-level text generating tool. It may throw us back to the old days of email spam (before Bayesian spam filters). It is always easier to troll and derail than it is to employ such techniques for good. Scaling up disinformation campaigns is a real threat to our democracies (or maybe this decade-old technique is already in use at scale by militaries, and this work is merely showing what the AI community's love for military funding looks like).

I am sure that the impact of the NIPS abbreviation is an order of magnitude lower than that of this technology, yet companies like NVIDIA used "NeurIPS" in their marketing PR before it was officially introduced (it made them look like the good guys, for a profit). How is that for co-opting ML research for PR purposes? Would the current vitriol displayed in online discussions have been appreciated when there was a name-change proposal for the betterment of society?

Disclaimer: this comment in favor of OpenAI was written by a real human. Could you tell for sure now you know the current state of the art? What would these comment sections look like if one person controls 20% of the accounts here?


> It is always easier to troll and derail than it is to employ such techniques for good.

Curious-- what possible good comes with the ability to generate grammatically correct text devoid of actual meaning?

Sure, you could generate an arbitrary essay, but it's less an essay about anything and more just an arrangement of words that happen to statistically relate to each other. Markov chains already do this and while the output looks technically correct, you're not going to learn or interpret anything from it.
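For the skeptical, the Markov-chain comparison is easy to demonstrate: a word-level chain a few lines long already produces locally plausible, globally meaningless text. A minimal sketch (the tiny corpus and function names are purely illustrative):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=20, seed=0):
    """Walk the chain: each next word is sampled from the followers
    of the current word, so every bigram is 'statistically valid'."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("the model generates text the model reads text "
          "the text looks correct but the text means nothing")
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Every adjacent word pair in the output occurs somewhere in the corpus, so it "reads" locally - but there is no global meaning, which is exactly the point being made.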

Same goes with things like autocomplete. You could generate an entire passage just accepting its suggestions. It would pass a grammar test but doesn't mean anything.

Chatbots are an obvious application, but how is that "good," or any different than the Harlow experiment performed on humans? Fooling people into developing social relationships with an algorithm (however short or trivial) is cruel and unethical beyond belief.

A photoshopped image might be pleasant or stimulating to look at, and does have known problems in terms of normalizing a fictitious reality. But fake text? What non-malicious use can there possibly be?


Current application: better character and token completion helps people with physical disabilities interact with computers. http://www.inference.org.uk/djw30/papers/uist2000.pdf (pdf)

Research progress: Better compression measures the progress to general intelligence. http://mattmahoney.net/dc/rationale.html

Future application: Meaningful completion of questions, leading to personalized learning material for students all over the world. If only there was an OpenQuora.


We incubate harmful viruses and bacteria so we can learn how they work, experiment, and test them. Having the output of the full model could allow analysis of structural weaknesses, or the building of GANs to detect fake text.

The technology is obviously going to go out there, why give well funded actors (nation states, troll farms) a head start, instead of giving everyone (researchers, hobbyists) an opportunity to prevent it?
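The discriminator half of that GAN idea can be sketched in miniature: train a classifier to separate human text from degenerate generated text. The toy below uses a one-feature logistic regression on the type/token ratio - a deliberately crude stand-in (real detectors would score model likelihoods; every name and heuristic here is illustrative only):

```python
import math

def type_token_ratio(text):
    """Fraction of distinct words - a crude 'fakeness' feature.
    Degenerate generated text tends to repeat itself, lowering the ratio."""
    words = text.lower().split()
    return len(set(words)) / len(words)

def train_discriminator(real_texts, fake_texts, epochs=2000, lr=0.5):
    """One-feature logistic regression: label 1 = real, 0 = generated."""
    w, b = 0.0, 0.0
    data = ([(type_token_ratio(t), 1.0) for t in real_texts] +
            [(type_token_ratio(t), 0.0) for t in fake_texts])
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x   # gradient of the logistic loss
            b -= lr * (p - y)
    return w, b

def p_real(model, text):
    """Probability the discriminator assigns to 'human-written'."""
    w, b = model
    return 1.0 / (1.0 + math.exp(-(w * type_token_ratio(text) + b)))

real = ["the quick brown fox jumps over a lazy dog",
        "language models compress text by predicting tokens"]
fake = ["the the the of of of the the of the",
        "spam spam spam eggs spam spam eggs spam"]
model = train_discriminator(real, fake)
print(p_real(model, "an entirely fresh sentence with varied words"))
```

A real arms race would pit a much stronger generator against a much stronger detector, but the structure - generator output becomes the detector's training data - is the same.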


I don't understand why the choice of opensourcing seems to be so binary for some. It feels as if it's either:

A-Leave everything open and hope that the global interoperability and cathartic flow of information creates a magic utopia for humanity.

B-Everyone incorporates into the soviet-union/chinese-firewall for protection.

There are manageable, more analogue alternatives too, like reducing the risk and probability of a disaster by trusting a human's judgement to limit access to potentially malicious code - which is exactly what they're doing in this case.


I’m agnostic about the short term effects of OpenAI’s decision, but I wonder if and hope that the eventual equilibrium will look more like things did before the internet.

In, say, the 1960’s, if someone stood on a street corner with a sign saying “The End Is Nigh,” most people would react with a shrug. They’d know that if the world were ending they’d probably hear about it from a trusted source like a newspaper or broadcaster.

I’m not worried about being tricked by fakes because I know that the NYT will be doing infinitely more than I ever could to verify hugely important facts, and won’t overreact to random bits of media. If you’re not someone I know personally, and you’re not a trusted source of public information, you may as well be holding a sign on a street corner, no matter how realistic the image or text on your sign.

I think there may be vast unintended consequences that have to do with scale, but intuitively it doesn’t feel like the “who to trust” problem is more than a matter of cultural behaviors around trust adjusting to the internet age.


Yeah, if this tech was the WMD that OpenAI says it is, the NYT would know about it.


I don't buy the argument that open-sourcing would allow citizens to better fight this. Let's be realistic. If the full model is published, for every one person who studies the model to understand how to fight false propaganda, there will be 5,000 who use it to shitpost for lulz.

Also, as everyone's pointing out, the cat's out of the bag. Whatever OpenAI does, the same ability will be in the hands of state actors, and large corporations, and soon not-too-large corporations and dedicated individuals, probably in a few years. So basically whatever OpenAI does today doesn't really matter.

We might as well think about totally different ways of building online communities when everyone has shitposting sockpuppet engines powered by the AWS/Azure DeepLearningToolKit Trial Version.


Or a neural net for detecting fake text.

How insane would it be if the birth of AGI comes from GANs of fake news?


Everything important is open sourced already.

The article cites replication as important and then paradoxically argues for releasing the model in lieu of replicating the training and verifying the result.

This is the AI equivalent of a pharmaceutical company releasing all their research and patents and then taking heat for not providing reagents and doing the organic synthesis for everyone too.


The difference is a pharmaceutical company is for-profit, while OpenAI's (a non-profit) mission (https://openai.com/about/) is "to build safe AGI, and ensure AGI's benefits are as widely and evenly distributed as possible." (emphasis mine)


I don’t see how corporate structure changes anything. It’s a huge gift of tons of work, which is far and away the hard part. They’ve enabled literally anyone who wants to to train their own model on whatever source material they want.


The argument in the article is unconvincing, and shows a distinct failure to think on a societal level or on time lines beyond a few weeks. The calculus seems simple to me. The potential harms from not open sourcing the model completely - minimal. Potential harms from open sourcing it completely - greater than minimal. Hence their decision.

OpenAI should be lauded for making this decision. If you think about the sort of people who would choose to work at OpenAI, it is clear this was a very tough choice for them.


To me it feels like everything about OpenAI's media strategy is calculated and specifically designed to maximize good PR with the public. It leaves a sour taste in my mouth.


It's very weird to me that a team of researchers who are credibly doing good work and are making an effort to clearly explain their research garners such suspicion.

Are you similarly skeptical of Deepmind, Distill, or Two Minute Papers?

Honestly, the norm of providing two versions of your research (arxiv and blog post) makes ML easier to follow and is something I would like more academics to attempt


Here's the most thoughtful piece I've been able to find on the controversial topic so far: https://www.fast.ai/2019/02/15/openai-gp2/


Fortunately, we have outstanding and trustworthy corporate citizens like Google, Facebook, Uber and others that will continue AI research safely, ethically and responsibly without the risk of such technologies falling into the unsteady hands of the public.


Sooner or later we will have tools for e.g. Reddit where one specifies a desired outcome of a discussion and lets bots, paired up with some optimization algorithm, figure out how to win a popularity majority based on the comments so far in individual threads and drive the discussion toward the stated goal. Later the same will be used in elections and other public "theaters", which won't matter any longer once humanity has been "cracked" - i.e. modeled well enough to predict and influence the average person, leaving only a faint feeling that something might be wrong, which only complete societal outcasts would pursue.


I'm honestly pretty conflicted here and the best thing that is coming out of this is some healthy debate.

IMHO the strongest argument against publishing it is that, because it's built from Reddit data, it includes a lot of toxicity... VinayPrabhu originally made that argument here: https://medium.com/@VinayPrabhu/reddit-a-one-word-reason-why...

Can't blame them for not opening up that can of PR worms. It could be like a version of Microsoft Tay that can form coherent paragraphs.

The bigger picture is that this could be used for some very effective social media manipulation. A bit of clever infrastructure, maybe another generation of improvement, and some understanding of human psychology (Overton window etc.), and managing 100,000 believable human accounts that subtly move debate on something like Reddit becomes very doable.

Then again, it's going to happen anyway, so better to focus on the defensive techniques now...


They did release the model [0]. A smaller-scale version of it. If you want the big one, all you have to do is scrape 45M web pages, and increase the parameters to scale.

What they didn't release is the training corpus and the trained filters. By not releasing these, and focusing on ethics, they are trying to avert attention from the mistake of training a massive cyclops on unfiltered internet text. They probably read a lot of embarrassing text they didn't share, imagine that.

0. https://github.com/openai/gpt-2/


> Precisely because everyone knows about Photoshop.

Everyone?! The non-digital-aware people on this planet, whom I assume are the majority, may never have heard of Photoshop or know what can be done with it. And of the remaining digitally-aware human beings who do know about Photoshop, how many think about it when seeing a photo? I bet it's a very tiny percentage.


Why is there so much focus on just the language generation aspect of this model? The language generation examples are to show how well the model generalized the language it's trained on, but the whole point of releasing the pretrained model is that this model also:

>achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training

Every comment I see here and elsewhere is strictly about automatically generating comments, but the real loss of keeping this model private is that releasing it would allow anyone to create very, very good NLP projects right away.

The big problem in ML/"AI" now and going forward is that there is an extreme asymmetry emerging in tech. The start of the last major tech boom was because of "disruption". Any clever hacker with a laptop and developer skills could create a product that would topple a giant. Likewise anyone could learn to do anything that needed to be done on their own computer in their own time. Now that's changing, to even play with these new tools you need special access and corporate support.

Releasing the pretrained model to the public would allow a range of groups that cannot afford to build a model like this to experiment on their own and achieve state-of-the-art NLP results - not just in language generation but in classification, translation, summarization, etc. Some kid in the middle of nowhere could download this and start messing with ideas for building a chat bot.

The threat is not that some rogue actor will create evil spam bots, but that some small scrappy startup would leverage the same, currently exclusive tools, of large technology companies.

The reason why democratizing AI is important is not so we can better fight off some vague threat, but so that, at least in the near term, individuals and small groups can still reasonably compete with the major players in the ML space. And the fact that an organization which claims openness, and which exists to "ensure AGI's benefits are as widely and evenly distributed as possible", can arbitrarily choose to go the other direction just shows that tech is moving toward a future where "disruption" becomes increasingly less possible and the industry is controlled by a handful of major players.


What's extremely bad for research is that they didn't even release their train and test sets. It's not hard to grab 40GB of text from the web, taken from links with at least +3 votes on Reddit. But even if you do that, you won't get the same train/test sets or the same split, so it's impossible to know whether a model is outperforming theirs.


They tested against standard language modeling benchmarks like PTB and WikiText, so it's entirely possible to compare.
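For reference, benchmarks like PTB and WikiText rank language models by perplexity - the exponentiated average negative log-likelihood per token - so any model evaluated on the same test set can be compared directly. A self-contained sketch of the metric:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    the model assigned to each token of the held-out test set.
    Lower is better; a uniform guess over V tokens scores V."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns probability 0.5 to every token has perplexity 2:
print(perplexity([0.5, 0.5, 0.5, 0.5]))  # → 2.0
```

This is why the shared benchmarks matter: the number is defined entirely by the test text and the model's per-token probabilities, not by OpenAI's private training corpus.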


Network architectures and trainers are improving quickly so it won't be long before others can create a similar model.

Their ace is having a huge clean dataset of grammatically correct text. This is not about the fine points of grammar, but that the content of the average web page is practically word salad.


The fact that this thread is on this site is scary; it seems like there are parties interested in stirring up a public outcry to get some potentially very harmful tools released to the public... the question is, why?


  A little learning is a dangerous thing; 
  drink deep, or taste not the Pierian spring: 
  there shallow draughts intoxicate the brain, 
  and drinking largely sobers us again.


This is a very nice summary of a lot of the social media discussion that has been happening. Very thoughtful arguments from people who actually work on this stuff.


The author doesn't mention the FUD being spread through the tech media: "AI company is so afraid of its creation, it won't release its source code". Good job helping people understand their AI future. This is a PR stunt. Come on, a little extra spam is not shocking news, and whatever can be used to spread "fake news" can also be used to spread correct news. Whatever PR stunt they tried to pull, it is creating a bad narrative for AI.


Not releasing the model seems to me to be entirely a PR stunt, and has made me lose a lot of respect for OpenAI.


@OpenAI - please, research reputations systems and authorship identification, not fake news generators.


Are they planning to change their name to ClosedAI?


release the code! you cant stop the signal


The code is out (https://github.com/openai/gpt-2), just not the larger model itself.


Saying 'the code is out' is a little misleading. It's some supporting code for working with a trained model, and a model definition of what is already a standard and widely-implemented architecture. It doesn't include the dataset, the code used to construct the dataset, or perhaps most importantly of all, the code to train a model on a fleet of TPUv3s.


They already released the code and that wasn’t enough for people.


It's like releasing a car without giving access to the specific fuel needed to run it.


The smaller model is out so you can test drive it, but you won't be able to hit 100mph.


This is the best publicity they could ever have and it didn't cost them a penny.

There are probably 10-year-olds in their bedrooms light years ahead of these guys. Certainly, from what we've seen, it's nothing to write home about, but it's all about the marketing and hype - convincing the mass media they've invented the next Terminator.


> There's probably [amateurs] light years ahead of these guys

This is not an 80s hacker movie where amateur hackers get into DoD computers using a public phone or whatever. This is actual research, funded with millions of dollars, done by people with years of experience in the field. I find this whole meme in the programming world - that some random programmer can work miracles and be super advanced - kind of dumb.


Is it a meme in the programming world? I've only encountered it in Hollywood and TV. It's like how every on-screen computer beeps and boops on every interaction: it's a dramatic device. No one does that IRL because it would drive you bananas in like ten seconds.

Cf. Glyph's "The Television Writer's Guide to Cryptography" https://glyph.twistedmatrix.com/2009/01/television-writer-gu...


Yeah, I've had a few CS friends and internet people say things like that - like there's probably some programmer out there who has solved P vs. NP but no one knows it - and of course there's the elite hacker who can do anything he wants. Not really seriously, but enough that it seems to be a meme in the CS world.


It is incremental progress though, not some entirely new model. A 10-year-old with a rich dad could learn to train it (and probably will).


This is crazy. It is a substantial step forward for NLP and basically every researcher/important person in the field agrees.

And hype for what? They aren't selling anything.


They are selling hype. I also see it as a jab at FB/Google: "look boys, we're gonna keep this nuclear bomb from filling your databases with spam".


1) What really bothered me personally about GPT-2 is that they made it look sciency by putting out a paper that looks like other scientific papers - but then they undermine a key aspect of science: reproducibility/verifiability. I struggle to believe 'science' that cannot be verified or replicated.

2) In addition to this, they stand on the shoulders of giants and profit from a long tradition of researchers and even companies making their data and tools available. But "Open"AI chose to go down a different path.

3) Which makes me wonder what they are trying to add to the discussion. The discussion about the dangers of AI is fully ongoing, and by not releasing background info, OpenAI is not contributing to how dangerous AI can be approached. OpenAI might or might not have a model that is close to some worrisome threshold, but we don't know for sure. So IMV, what OpenAI primarily brought to the discussion are some vague fears about technological progress - which doesn't help anyone.


Re 1: GPT-2 is no different from most work by DeepMind. DeepMind, in general, does not release code, data, or models, yet does not seem to get complaints about reproducibility, that supposedly "key aspect of science".



