- ML is getting more powerful and will continue to do so as time goes by. While this point of view is not unanimously held by the AI community, it is also not particularly controversial.
- If you accept the above, then the current AI norm of "publish everything always" will have to change.
- The _whole point_ is that our model is not special and that other people can reproduce and improve upon what we did. We hope that when they do so, they too will reflect about the consequences of releasing their very powerful text generation models.
- I suggest going over some of the samples generated by the model. Many people react quite strongly, e.g., https://twitter.com/justkelly_ok/status/1096111155469180928.
- It is true that some media headlines presented our decision not to publish the model as "OpenAI's model is too dangerous to be published out of world-taking-over concerns". We don't endorse this framing, and if you read our blog post (or even in most cases the actual content of the news stories), you'll see that we don't claim this at all -- we say instead that this is just an early test case, we're concerned about language models more generally, and we're running an experiment.
Finally, despite the way the news cycle has played out, and despite the degree of polarized response (and the huge range of arguments for and against our decision), we feel we made the right call, even if it wasn't an easy one to make.
If this is your whole point, then I think you are missing something fundamental. Implementing these models doesn't require reflection, or introspection, or any sort of ethical or moral character whatsoever; and even if it did, all that will happen eventually is someone (without the technical background) will simply throw a lot of money at someone else (with the technical background, but who needs to, you know, eat, and pay rent, and so on) to implement it. You are fooling yourself if you think your stance makes a single mote of difference in this arms race.
In fairness, if that's true, then no one has any need of their model.
More seriously speaking, why does anyone need, say, "training set x", or "model y", to make their implementation work? You don't. So I don't really understand why everyone is so worked up about them not releasing this stuff. If you want to do it, do it. If not, don't. But there's no need to say, "I demand everyone do it, and I'll have a meltdown if they don't."
- If they are going to publish the research, and want to claim it as research (which they will, either by submitting it to a conference or putting it on arxiv for the citations), then they should publish the supporting material, because without the supporting material it is impossible for reviewers or other researchers to evaluate it. This is not just the model--they are also not publishing the training code or the dataset.
In short, they want to have it both ways, by having their work accepted as scientific research, yet providing absolutely no way of determining if the results are reproducible. That is a horrible, horrible standard. (Other companies are guilty of this as well, btw.) I mean, think about how absurd it is that they are saying "our scientific results are too good to publish. Trust us." Why is this acceptable? Because it sure as hell wouldn't be acceptable if it were a random person releasing a paper claiming incredible accomplishments while providing absolutely no evidence.
- The other criticism is that the justification for why they aren't publishing (which is that they are too concerned with the moral and ethical implications of their work) is, well, a load of crap. They aren't doing anything to contribute to the ethical or moral use of these tools by doing this and they aren't slowing research into the area one bit. If they really wanted to have an impact here they should have just not said anything (but of course, then the authors couldn't put this on their resume...).
Whether they are releasing the model is not the issue on its own, and I don't think anyone is throwing a fit because someone doesn't release their model. It's the _why_ and the implications that bother people.
OpenAI are extremely sensible to draw attention to the fact that AI is approaching a boundary that has practical implications. It is good that everyone is being alerted that that boundary might be crossed at any time in the foreseeable future.
Now the novelty is that this can be better targeted. But even simple Markov-chain based text generators were good enough to fool people for a bit.
And there were always people who had too much free time to write. A lot. (See for example the crackpots and conspiracy theorists who bombard physics forums. See the 9/11, Zeitgeist, etc. movies. See how much has been written about anti-vaxx, about quantum woo, etc.)
Reputation systems work pretty well for countering spammers.
And against APTs (advanced persistent threats, spearphishing attacks, etc.) there's no real "universal" protection anyways. (You need a competent security team to out-think and out-resource the attackers in every possible dimension.)
This AI is the same as the paid Russian trolls and the unpaid scammers, and so on.
I agree with your last point though - it falls into the same category as paid Russian trolls. I think that's exactly why they were hesitant to release the pre-trained models - they didn't want to make it easier/cheaper for a bad actor to replicate the 2016 election.
It remains to be seen whether their decision will make an iota of a difference. But I understand their motivation.
I work in this field, and yes, this is very novel (at least in terms of the quality).
It's the biggest improvement in quality I've ever seen. The long term coherence is so much better than anything else that has ever been built.
Yes, it can serve as a great customized propaganda generator, and yes, people can be spun 'round and 'round with it. But they already can be, with pretty much anything, from the simplest of phrases like "make X great again" to the elaborate scams of new-age bullshit.
I simply disagree on the "virulence" or weaponization factor of this with others. (Especially when it comes to the possible "defenses", none can be "deployed" in 6 months. You can't teach critical thinking to billions of people overnight.)
I don't have a strong opinion about whether they should have released this model or not.
I do know it would make a great commercial spam generator though. Want a million product reviews which seem legitimate, quickly? This is the thing.
Conjecture: GPT-2 trained on reddit comments could pass a "comment turing test", where the average person couldn't distinguish whether a comment is bot or human with better than, say, 60% accuracy.
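One way to make that conjecture testable: show raters a shuffled mix of real and model-generated comments and check whether their bot-vs-human accuracy clears the conjectured 60% bar. A minimal sketch of the arithmetic, with made-up numbers for illustration:

```python
# Hypothetical forced-choice evaluation of the "comment Turing test" conjecture.
# n_trials and n_correct are placeholder numbers, not real data.
from math import sqrt

n_trials = 500    # judgments collected from raters (hypothetical)
n_correct = 290   # correct bot-vs-human calls (hypothetical)

acc = n_correct / n_trials
se = sqrt(acc * (1 - acc) / n_trials)        # normal-approximation standard error
lo, hi = acc - 1.96 * se, acc + 1.96 * se    # ~95% confidence interval
print(f"accuracy = {acc:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
# If the whole interval sits below 60%, this particular run supports the conjecture.
```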
At this stage, these AIs can only help. Imagine we are given this tool that can generate samples from the "uninformative but realistic looking text" distribution. We can then put it in a discriminator to filter out blabbering bots and humans together, or invert it to summarize the small kernel of information, and that would be a great thing. The better these models learn about typical human behavior, the better off we are at identifying the truly exceptional. It's when AI starts to sense and incorporate novel information from the non-human environment that you really have to worry.
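For what it's worth, the "put it in a discriminator" idea can be approximated without any new model: score text under a public language model and flag suspiciously low perplexity, since sampled text tends to sit in the model's own high-probability region. A rough sketch, assuming the Hugging Face transformers library and the released small GPT-2 checkpoint (the threshold is a made-up placeholder):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the language model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

def looks_generated(text: str, threshold: float = 20.0) -> bool:
    # Threshold is illustrative; calibrate it on known-human vs. known-generated text.
    return perplexity(text) < threshold
```

This is only a heuristic - humans writing boilerplate also score low - but it's one cheap thing a platform could deploy.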
Perhaps, but that's the world we live in. I suspect the average reddit commenter is already more articulate than the average person (citation needed, I know. But reddit skews highly educated young male in a first-world country. There's no way they do worse than a worldwide average).
Other than that, I agree with your comment.
But it's not doing symbolic reasoning. It's not constructing a counter-argument from your argument. It simply lives off previous epic rap battles of internet flamewar history about... well, about anything, since it's the Internet, and people like to chat, talk, and write essays on every topic there is. Satire too. So there is always something to build that language model on.
Though that will come too. Eventually.
There is no real profit to be made by generating realistic looking text. Spammers don't work that way, spammers haven't cared about realistic looking text for years. Nor have spam filters cared much about text for a long time, exactly because it's so easy to randomise. Anti-spam is not a good reason to hold back on language generation models, in my view.
As for HN, if bots can write posts as good as humans, great, why hold back?
People are not trivial automatons who can have their opinions rewritten on the fly by auto-generated text. If auto-generated text reaches into its giant grab-bag of learned expressions and produces something actually interesting or insightful, people might be interested in that new line of thinking, but if - like many of these examples - it's essentially rambling (if coherent) nonsense, then it won't have any impact at all.
So I rather think it's you fooling yourself. You've been reading comments online for years without knowing who or what produced them. If you discovered half of them were artificial tomorrow, what difference would it make? The people around you are already judging arguments based on the content, not their volume or who wrote them.
It seems that as far as information warfare goes, "less is more" works quite well, and they rely on targeted people to spread the news for them.
When you want to drive an agenda you don't need 100,000 unique comments; you need a good copypasta.
Overall I'm sick of this dramatization of the AI catastrophe until there is a proven path, with agency, for it to actually operate in the real world.
A chat bot isn't a threat to anyone even if it turns homicidal.
Regarding an agenda, sure, good pasta is fine and all, and regular ol' people are fine, but it is not cost effective. This is a million times cheaper, which means you can use it everywhere, not just the obvious places, you can be everywhere, and you can do more than just push a couple of big items, you could push tens of thousands of them, micro-targeted all the way down to the individual. Don't dismiss it so easily - the potential scale is far, far larger than anything existing to date.
And I would note that the reason 100,000 comments aren’t effective now is precisely because they are too formulaic, too obviously fake when used on such a large scale. This has the potential to create real, live, seemingly active and believable online communities of millions of people, all at fractions and fractions and fractions of a penny compared to current methods. People read news, then comments (or reviews or whatever), because they use them to determine the validity of the content they just read; if it’s no longer possible to tell from the comments what’s a scam and what isn’t... well, you could do a lot of things with that.
I'm also confused by the threat models earnestly put forth in your blog post. Are we really concerned about deep-faking someone's writing? The written word already demands attribution by default: we look for an avatar, a handle, a domain name to prove the person actually said this.
It seems more like "nukes are safer when multiple rational state-level actors have them" rather than when anyone able to pull a git repo has them.
Regardless, I agree with TFA that this is a silly and arbitrary time to yell “fire.” It’s PR.
True. On the PR side though, it'd be incredibly hard to say "we want to make replication moderately difficult, but not too difficult." Everyone would end up arguing exactly how much should be released, how it would prevent X,Y,Z folks from contributing to AI, etc.
> Regardless, I agree with TFA that this is a silly and arbitrary time to yell “fire.” It’s PR.
Alternatively, it does provide good insight into the reactions in the community as a whole, and continues the conversation on exactly how much should be released. Maybe I'm not far enough into the ML community, but the decision not to put the "keys to the kingdom" on github for every script kiddie to weaponize seems reasonable to me, especially as a precedent.
Mostly it's scary not because it's good - as writing goes, it's quite bad. It forms coherent sentences, but otherwise it's nonsense. I saw similar nonsense producers in the early '90s based on Markov chains and whatnot.
No, the scary part is how much it reminds me of what I am reading in the media all the time. My current pet concern is that AIs will start passing the Turing test not because AIs are getting so good but because humans are getting so bad. A bunch of nonsensical drivel can easily be passed off as a thoughtful analysis or a deep critical think-piece - and that's not my conjecture; it has been repeatedly proven by submitting such drivel to various academic journals and having it accepted and published. I'm not saying people are losing critical thinking skills - but they are definitely losing (or maybe never even had?) the habit of consistently applying them.
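For contrast, the early-'90s "nonsense producers" mentioned above really are just a few lines of code. A generic word-level Markov chain sketch (not any particular historical program):

```python
# Minimal word-level Markov chain text generator of the kind described above.
import random
from collections import defaultdict

def build_chain(corpus: str, order: int = 2):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order: int = 2, length: int = 50):
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)
```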
Exactly. When it comes to generating a large volume of apparently-good sentences, non-AI (or classical) approaches are still more than good enough. Those will be equally disruptive, since the defending side has yet to develop a proper countermeasure based on the "sensible"-ness of content. Plus, they will be much easier to customize and adapt to the situation, while ML-based solutions often need remodeling and retraining when repurposed.
> My current pet concern is that AIs will start passing the Turing test not because AIs are getting so good but because humans are getting so bad
AI will start deceiving the public even before it passes the Turing test. It's much harder to spot bots amidst people than in a 1-vs-1 chatroom.
Can you cite your source? I find this hard to believe.
Only people with a large amount of money and a lot of expertise. What you are doing is the opposite of democratizing AI.
Yet from Google we heard nothing. Which is the optimal decision for them - they only lose by blowing the whistle.
This would be OK if this were the first time the media went wild over an AI story. But this has already happened 10,000 times this year.
That much is nothing out of the ordinary. It is interesting (at least to those of us who aren't natural language researchers) so why shouldn't we talk about it? Why shouldn't journalists write about it?
Inevitably their mildly controversial decision to hold some data back got a lot of people discussing whether it was necessary. Which is also perfectly okay.
So, in the end, the complaint is just about why people don't have smarter takes on things. I don't know what to tell you; that's just how social media works sometimes.
The distortion of public debate caused by community exclusiveness on social platforms, by the curation and manipulation of social feeds and by the dynamics of online debate where the loudest and angriest voices dominate is one place that we could do with some focus.
Another place is the management of simple models - plain-Jane stuff like a learned classifier. People are making these with Python and R and releasing them into infrastructures and apps, and we don't know what they are, where they are, or how they are interacting.
Instead we have Wizard of Oz-style stories to distract us from who's actually hiding behind the curtain. If we fall for this then we may find ourselves living in a vicious totalitarian society with no obvious way out of it.
Journalists should write about it in an informed and professional way, that's fine. But they need to write about stories that are impactful and important, and if they were to write about this one in this way ("text scrambler makes a pretty good paragraph one out of 30 tries, has no idea of what is going on") they would get no clicks (there will now be a second wave of follow-ups like that to ride on the coattails of the story). Instead they have to make it sound like robots are going to take children from schools and experiment on them live on TV, and this makes them famous and rich.
There is no real revision of the story, because the follow-on stories disappear from view while search engines and other journalists use the original hysteria. Look at what happened with the two negotiating bots at Facebook (the game was to negotiate to get books and balls; the bots tended to use a shorthand to negotiate rather than the English they were trained on). This was reported as "Facebook researchers have to pull the plug on AI that they no longer understand", and that is the narrative that we will have on that story more or less forever.
>Recycling is NOT good for the world.
>It is bad for the environment,
>it is bad for our health,
>and it is bad for our economy.
>Recycling is not good for the environment.
>Recycling is not good for our health.
>Recycling is bad for our economy.
>Recycling is not good for our nation.
The first paragraph keeps repeating the <X> is <bad | not good> for the <Y> pattern 8 times.
>And THAT is why we need to |get back to basics| and |get back to basics| in our recycling efforts.
"get back to the basics" is repeated twice in the same sentence.
>Everything from the raw materials (wood, cardboard, paper, etc.),
>to the reagents (dyes, solvents, etc.)
>to the printing equipment (chemicals, glue, paper, ink, etc.),
>to the packaging,
>to the packaging materials (mercury, chemicals, etc.)
>to the processing equipment (heating, cooling, etc.),
>to the packaging materials,
>to the packaging materials that are shipped overseas and
>to the packaging materials that are used in the United States.
It literally repeated "packaging" 5 times in the same sentence, and the overall structure was repeated 9 times. Also, what type of packaging is based on mercury?
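Tallying this by hand isn't necessary, by the way - the repetition is easy to quantify by counting recurring n-grams within a sample. A toy sketch:

```python
# Count how often each n-gram recurs within a generated sample (illustrative only).
from collections import Counter

def repeated_ngrams(text: str, n: int = 3, min_count: int = 2):
    words = text.lower().split()
    grams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {" ".join(g): c for g, c in grams.items() if c >= min_count}

sample = "Recycling is not good for the environment. Recycling is not good for our health."
print(repeated_ngrams(sample))
# -> {'recycling is not': 2, 'is not good': 2, 'not good for': 2}
```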
That is the saddest part. It's not that the AI is good; it's that we count saying "X is good/bad" 3 times as a persuasive argument. It won't be hard to learn this kind of "arguing"; it's just sad that that's what we're teaching our AIs to do, and that we get excited when they do it.
I didn't say that it's a persuasive argument, I said that it can be persuasive IN arguments. There's nothing sad about an AI learning it, or people being happy with it, it's very impressive.
(This of course doesn't make it an amazing feat of computer engineering.)
The overarching narrative is great, but that's probably driven by the great antithesis supplied by the experimenter.
It'd be interesting to know how this works, what happens if less or more is given as thesis/antithesis/assignment, and after how much output it turns into gibberish (or repeats).
Heck, maybe having to compete with this will raise human discourse (Joking).
Have you done a plagiarism search on that text to see how similar it is to the input corpus? I'm by no means an ML expert, but I've played around with models for random name generation and one thing I've noticed is that as the models become more accurate, they also become much more likely to just regurgitate existing names verbatim. So if you search the list of names and notice something that seems particularly realistic, it could be because it's literally taken in whole or in part from the training data set!
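A crude version of that plagiarism search is straightforward if you have a corpus to compare against: check what fraction of a sample's long n-grams appear verbatim in the training text. The file names below are hypothetical placeholders (WebText itself wasn't released):

```python
# What fraction of a generated sample's 8-grams appear verbatim in a corpus?
# High overlap at long n suggests memorization rather than novel generation.
def ngrams(words, n):
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_fraction(sample: str, corpus: str, n: int = 8) -> float:
    sample_grams = ngrams(sample.lower().split(), n)
    corpus_grams = ngrams(corpus.lower().split(), n)
    if not sample_grams:
        return 0.0
    return len(sample_grams & corpus_grams) / len(sample_grams)

corpus_text = open("training_corpus.txt").read()   # hypothetical file
generated = open("generated_sample.txt").read()    # hypothetical file
print(f"{overlap_fraction(generated, corpus_text):.1%} of 8-grams appear verbatim in the corpus")
```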
(The talking unicorn example on their page is also meant to demonstrate that, no, it's not just memorizing, but I think it's a bit more compelling to check from the raw samples)
How is that open?
How is that not centralization of power?
Here are a few that come to mind.
-Secrecy? But how will you continue to exist on the PR scene if you don't release anything?
-Are you willing to pay every developer who is able to replicate your paper, more than what the black market would pay?
-How are you working on incentive alignment to make sure that all people who can replicate your results have more incentive to do good than bad, especially in the current environment where users and valuable data are siloed by a few companies?
-Misdirection to keep an edge, i.e., planting bugs / not fixing bugs in public releases; spreading false results; only working on problems that need high resources, to limit the number of actors who will be able to replicate?
-Tracking the people who have the competence to replicate and taking preemptive measures.
-Restrictions on GPUs/CPUs/silicon wafers.
Who can regulate? How can we regulate? What are the negative consequences of regulation? What happens if we don't, at what odds and on what time horizon?
That said, withholding the pretrained models probably won't make much difference, because bad actors with resources (e.g., certain governments) will be able to produce similar or better results relatively quickly.
All it will take is (1) one or two knowledgeable people with the willingness to tinker, (2) a budget in the hundreds of thousands to a few million dollars at most, and (3) a few months to a year. Nowadays a lot of people are familiar with Transformers and with constructing and training models across multiple GPUs.
Ok, accepting that premise, what people/organisations would you share the research with and based on what criteria?
One of the reasons Elon distanced himself was because of what the OpenAI team wanted to do. I am wondering if this new paper has anything to do with that? Or what is it, in general, that Elon doesn't agree with in what OpenAI is doing?
This is the worst headline on this matter, from one of the leading media outlets in India. A language model being touted as a "fake news AI tool". This is like calling a car "a run-over machine by Ford".
That's a great dysphemism. Gonna start using that.
So for the Ford analogy to be apt, Ford would have to have designed a car nobody has ever seen, and released a video which is basically just hundreds of hours of the car running people over.
I mean, a car has lots of well understood non-running-people-over capabilities. But have they demonstrated that this model is useful for anything other than generating fake news-sounding spam text?
Like many, I was viscerally shocked that the results were possible, the potential to further wreck the Internet seemed obvious, and an extra six months for security actors to prepare a response seemed like normal good disclosure practice. OpenAI warned everyone of an “exploit” in which text humans can trust to be human-generated, and then announced they would hold off on publishing the exploit code for 6 months. This is normal in computer security and I’m taken aback at how little the analogy seems to be appreciated.
Why? There was news about bots writing news ~5 years ago. Given a few simple facts, the AI generated the regular info-scarce but fluffy news piece.
Now OpenAI added better everything (better language models, more data, better "long-term memory" for overall text coherence), and we got better fluff.
It seems like a GAN and a simple Markov chain generator. (Even if it's not that simple of course.)
And maybe it's the equivalent of the "modern art meme" style transferred to AI/ML research. ( https://i.pinimg.com/236x/71/e1/21/71e12151f4b59d8433d32c126... )
What I'm trying to convey is that wrecking the net with auto-trolls was already possible, but for some reason Mechanical Turk was cheaper.
> OpenAI warned everyone of an “exploit” in which text humans can trust to be human-generated
Sokal already did that, and so did http://thatsmathematics.com/mathgen/ ... but of course this might be qualitatively different, because it can be targeted. (Weaponized, if you will.) The defense/antidote is the same, though, and it takes a lot more than 6 months to make people better at critical thinking - but maybe you already heard about the difficulties of that :)
The strongest counterargument I've seen to OpenAI's decision is that the decision won't end up mattering, because someone else will eventually replicate the work and publish a similar model. But it still seems like a reasonable choice on OpenAI's part–they're warning us that some language model will soon be good enough for malicious use (e.g. large-scale astroturfing/spam), but they're deciding it won't be theirs (and giving the public a chance to prepare).
The lead policy analyst at OpenAI has already tried to engage the community in discussing the malicious use of AI, on many occasions, including this extremely well-researched piece with input from many experts: https://maliciousaireport.com/ . But until OpenAI actually published examples, the conversation didn't really start.
In the end, there's no right answer - both releasing the model, and not releasing the model, have downsides. But we need a respectful and informed discussion about AI research norms. I've written more detailed thoughts here: https://www.fast.ai/2019/02/15/openai-gp2/
This seems to be a particularly weak argument to make. How is their model going to impersonate someone in a way that a human can not?
This new AI could help them with that. They can let go of the paid writers and hire an IT guy/gal to operate the bots - and the VPNs. (Or they can just pay a lot less to the paid trolls just for their home ADSL/Cable/4G connection.)
But so far this AI is not going to pass a Turing test. Sure, maybe it can be integrated with a chatbot. And it'll be interesting how internet communities will react.
Tesla does not even offer their full self driving package anymore. No coast to coast drive yet. Hard to say that's an amazing job.
OpenAI abandons their open source GitHub repos after a year, is now not releasing code, and is always in DeepMind's shadow. Alive, yes. Successful, no.
At the same time, Andrej dropped the idea of a fully learned end-to-end model (that's just impossible with current deep learning technology) and started methodically replacing the somewhat-working heuristics with machine learning, one by one. He also ramped up the data-gathering pipeline.
He needs to build the full simulation, agent systems that can simulate other drivers/humans, implement inverse reinforcement learning... there's so much to do where Waymo is far ahead (but Tesla is ahead in data gathering).
Reading that piece gives me the same weird feeling as watching AlphaStar playing through a StarCraft game.
"All models still underfit WebText and held-out perplexity has as of yet improved given more training time."
and the previous paper
It's a transformer, not LSTM, and it's very large but not structured in a particularly unusual way.
Only when a human wants to fool a human, it impersonates whatever possible but a human, then suddenly charges a shitload of ape shit, and then behaves like it never happened.
Without decent natural language translation or automatic reasoning, which they do not have, this looks like an N-gram model where N equals the number of words in the language corpus.
Secondly, have you seen the results? I was dumbfounded and fascinated. I spent hours reading the samples.
Maybe I'm just out of the loop and this truly isn't anything significant, but then that only proves that OpenAI was successful: Now I am aware of the latest advances in NLP and hopefully so too are many more.
Yes, I've seen the results. They're nice but, as the article points out, not extraordinary compared to state-of-the-art, open NLP research.
OpenAI's behaviour here smells of Gibsonesque 'anti-marketing', using the misunderstanding of AI and its capabilities in the general population as a means to stir up publicity for their organisation.
This is unethical, misrepresents progress in the field, and produces confusion in the press.
> misrepresents progress in the field
Can you point me to some examples of unsupervised learning with similar results? Not asking for rhetorical purposes; I just genuinely was shocked by how compelling their results were, especially given this was unsupervised.
> OpenAI's behaviour here smells of Gibsonesque 'anti-marketing'
I don't disagree that the ethics are questionable, but I think it's highly speculative to suggest that they didn't release the full model purely as a marketing ploy (I'm assuming this is the main objection to their marketing "tactics"). As you say, it "smells" this way, but I fail to see how it's really so clear-cut.
Model-wise this is just OpenAI's GPT with some very slight modifications (laid out in the paper).
Ilya has now commented in the thread and essentially made the same point: this is state-of-the-art performance, but reproducible by everyone, because it uses a known architecture.
The secrecy and controversy makes no sense if the model is open, even the methodology of data collection is laid out. There is no safety here assuming that anybody who wants to rebuild the model can do so simply by putting enough effort into rebuilding the dataset, which is not an issue for a seriously malicious actor.
> The secrecy and controversy makes no sense if the model is open, even the methodology of data collection is laid out.
This is exactly why I found the results so compelling: It suggests that this technology is already accessible to some big players: The odds that a Big Corp. or govt agency has already begun using the technology are high, which is precisely why the public needs to start thinking about it.
I cannot know exactly why OpenAI chose to withhold the model, especially given how easy it would be to recreate, but even if we assume that OpenAI withheld the full model purely to drum up controversy, the controversy is justified, as it's very likely that this technology is already in the hands of a few big players.
If they did, I bet it would be used for automated "troll farms".
Like weaponized malicious ELIZA, it would have fake user profiles reacting to keywords, spinning suitable counter-argumentation and/or lies for as long as it takes to change opinions and perceptions, relentlessly, day and night.
This isn't my impression.
It's not the best in many domains, but it is a single network that is moderately decent in many domains. You can use it to summarize by adding TL;DR to the end of the text. You can use it to translate by listing previous translations. And of course it blows away any state-of-the-art RNN text generation I've seen. RNNs tend to fall apart after 1 or 2 sentences, whereas this holds together for multiple paragraphs.
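If anyone wants to try the "append TL;DR:" trick themselves, here's a rough sketch using the publicly released small GPT-2 checkpoint via the Hugging Face transformers library (my assumption for the tooling; the large model discussed here is the one being withheld):

```python
# Zero-shot summarization by prompting: append "TL;DR:" and let the model continue.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

article = "..."  # the text you want summarized
prompt = article.strip() + "\nTL;DR:"
ids = tokenizer(prompt, return_tensors="pt").input_ids

out = model.generate(
    ids,
    max_length=ids.shape[1] + 60,   # continue ~60 tokens past the prompt
    do_sample=True,                 # top-k sampling, as described in the GPT-2 paper
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```

The small checkpoint won't match the quoted samples, but it's enough to see the prompting mechanism at work.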
I'm trying to get a sense of just how quickly things have been advancing. I read a few NLP white papers about a year ago and never saw anything as compelling as this, but I am definitely an outsider possibly on the left hand side of the Dunning Kruger graph...
I wouldn't be so sure about that. Take their reporting of Charlottesville events and Trump's comments about them. Here's what Trump _actually_ said: https://twitter.com/ZiaErica/status/1096572062196486144. Pretty reasonable point of view, all things considered. What was NYTimes "reporting"? That Trump is "defending white supremacists", of course. Don't believe me? See for yourself: https://www.google.com/search?q=trump+charlottesville+nytime.... Why was NYTimes doing that? It's either deliberate malice or incompetence, both of which would make NYTimes quite friendly to automatically generated fake news as long as they fit their narrative.
But there's a bigger issue with all of this. When people see this tech, they immediately think that it'll be used to generate fake news (which it will be, to be sure). BUT, it could also be used to do the exact opposite: take facts and summarize them without agenda-driven omissions, without "reading minds" or inventing "sources" "familiar with" someone's "thinking", or passing off uncorroborated dossiers or book chapters as gospel truth.
Pretty dumb and disrespectful to politicize a blog post about OpenAI.