[dupe] AI text generator not released for concerns about implications (openai.com)
63 points by HuangYuSan 70 days ago | 50 comments




Hmmm....

> As the above samples show, our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text.

OK, let's look at the sample that's displaying by default:

> System Prompt (human-written): Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

> Model Completion (machine-written, first try):

> “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night. The battle lasted for hours until two of the largest Orcs attempted to overwhelm Aragorn. When they finally stopped, they lay defeated and lifeless for miles and miles.

> [Aragorn says something]

> “I’ll never forget it!” cried Gimli, who had been in the thick of the battle but hadn’t taken part in it.

This is not "close to human quality". It's terrible. Gimli kills an orc in battle... without taking part in the battle. It takes two words before the opponents (as opposed to, say, the battlefield) are reduced to a "blood-soaked quagmire", but the battle lasts for hours after that. After which two orcs lay defeated and lifeless for miles and miles.

This isn't even coherent from one sentence to the next. And paragraph three directly contradicts paragraph one. And Gimli calls Legolas a dwarf!


This is pretty directly addressed right after what you quoted:

> As the above samples show, our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text. Nevertheless, we have observed various failure modes, such as repetitive text, world modeling failures (e.g. the model sometimes writes about fires happening under water), and unnatural topic switching. Exploring these types of weaknesses of language models is an active area of research in the natural language processing community.

The authors go on to discuss more limitations (for example, the dataset doesn’t contain much outside of LotR and some celebrities). I imagine that what the authors call “coherence” is weaker than what you are referring to (the AI is not necessarily telling a story, but it stays on the same topic / characters).

I still think that the result is incredibly impressive and powerful. You could start with this as a sort of English “noise”, and then run the result through a parser. This would allow you to add some “hard coded” world modeling or constraints. E.g., maybe you could mix in sentiment analysis and reject some sentences to roughly control the narrative, as in the sketch below.
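
As a rough sketch of that generate-then-filter idea (the generator stub and word lists below are made up purely for illustration; a real setup would sample continuations from the model and use a proper sentiment classifier):

    import random

    # Toy word lists standing in for a real sentiment model.
    POSITIVE = {"victory", "triumphant", "good"}
    NEGATIVE = {"blood", "defeated", "lifeless", "quagmire"}

    def sentiment(sentence):
        words = [w.strip(".,!?\"").lower() for w in sentence.split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    def generate_candidates(prompt, n):
        # Hypothetical stand-in for the language model: a real setup would
        # sample n continuations of `prompt` from the model instead.
        canned = [
            "The battle lasted for hours until the orcs lay defeated.",
            "Gimli raised his axe with a triumphant cry of victory.",
            "A blood-soaked quagmire spread for miles and miles.",
        ]
        return [random.choice(canned) for _ in range(n)]

    def constrained_continuation(prompt, want_positive=True):
        # Rejection sampling: keep only candidates that pass the hard-coded
        # constraint (here, a crude sentiment check) and discard the rest.
        for candidate in generate_candidates(prompt, n=50):
            if (sentiment(candidate) > 0) == want_positive:
                return candidate
        return ""  # nothing passed; in practice, sample more or relax

    print(constrained_continuation("Legolas and Gimli advanced on the orcs"))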


> I still think that the result is incredibly impressive and powerful.

I agree in a way that I suspect is much more specific than what you have in mind. This system is managing to produce a lot of text which is not heavily constrained, and what it produces is generally grammatical English. That is impressive; in the past, producing grammatical text meant very tight restrictions on what it was possible to say, making "text generators" little more than prerecorded phone tree messages.

But this model clearly doesn't know the meaning of anything it writes, and therefore can't produce anything better than obvious nonsense. This is true of some humans too -- it is a very serious condition known as Wernicke's aphasia ( https://en.wikipedia.org/wiki/Receptive_aphasia ):

> Patients with Wernicke's aphasia demonstrate fluent speech, which is characterized by typical speech rate, intact syntactic abilities, and effortless speech output. Writing often reflects speech in that it tends to lack content or meaning.

Obviously, those suffering from Wernicke's aphasia are not able to function in society, since they effectively can't say or understand anything. I don't think matching the performance of humans who have mental deficiencies so serious that they are unable to function really counts as being "close to human quality".

> I imagine that what the authors call “coherence” is weaker than what you are referring to

I had two specific things in mind as "coherence" failures:

- Gimli kills an orc, and then is said to have not taken part in the battle.

- The sentence "When they finally stopped, they lay defeated and lifeless for miles and miles." In context, the referent of "they" can only be the two orcs that attempted to overwhelm Aragorn. But it isn't possible for two dead orcs to cover "miles and miles" of terrain. If this had been written by a human, I would assume that what the writer had in mind, but failed to achieve, was to use "they" to refer to everyone taking part in the battle; I can't really make that assumption here. That sentence needs to use nouns, not pronouns, because its context doesn't allow for the pronouns.


Huh. Likening current NN limitations to aphasia is actually a brilliant insight.


I'm an impatient reader and I skip parts I think don't matter to the story, like what exactly happens in a fight, or descriptions of clothing. I didn't notice any of the errors you mentioned.

For example, on the «“You are in good hands, dwarf,” said Gimli» part, I pattern-matched to [boisterous protagonist remark] when I saw the opening quote, and skipped ahead until after the period.

My point is: to a reader like me, this "filler" (that's not the right word, but you get what I mean) could be machine-generated and I would barely notice it. I guess an author could concentrate on writing the "important parts" and let the machine "fill up the gaps".


That filler is in there for other audience members. I think of clothing descriptions as filler too, but I remember Brandon Sanderson mentioning how female draft readers for Mistborn kept objecting to him that he wasn't going into enough detail about what the protagonist was wearing.

You may not notice that text you didn't want to read anyway is just random self-contradicting gibberish, but someone wanted to read that part of it, and they will notice.


Perhaps Gimli is talking to himself reassuringly before battle

I kid, but the human mind has this extreme capacity for filling in the blanks and re-adjusting plain contradictions into something coherent.

It's a little bit how I'm able to imagine these epic stories from Dwarf Fortress' Legends mode (unfortunately can't provide any relevant links right now).


This is basically indistinguishable from my own level of coherence.


This bit is also confusing.

> “You are in good hands, dwarf,” said Gimli

The line reads as though they are talking to a dwarf, when actually Gimli is the dwarf.


The best weapon against centralisation and control by the few is publication and distribution.

Others are going to do it, others will replicate the work, the best defence is getting it out there so we can understand it and learn how to counter it.

“Open” AI indeed.


Yep, you can't put the genie back in the bottle.

If you successfully tested a nuclear weapon, it would be incredibly naive to think you could save the world from nuclear weapons just by keeping its implementation secret. Someone else out there is just as smart as you, and once you've proven it's possible, it's just a matter of time before someone else figures out how you did it.

Although personally I don't think their creation is anywhere near as dangerous as they would like to think; it feels more like a PR stunt / dystopia LARPing.


Yes, indeed, I myself have several large nuclear weapons at home which completely validates your point.


There is always that boy who built a nuclear reactor at home. Not quite nukes beside the wine bottles in the cellar, but he overcame at least some of the problems you'd face to get there, which shouldn't be doable.

https://en.m.wikipedia.org/wiki/David_Hahn


> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.

i wish people would stop pretending that there is some good way to bring this technology into existence. yes, it's nice to try to let the good guys use it first, but that's just irrelevant in the long term. ultimately the result is going to be total proliferation of this technology in all areas where it has utility, and it will be used to the maximum extent in every application it is suitable for, including the really bad ones. the roll-out will make the transition smoother, but it won't change what's actually important: the end result on the lives of our grandchildren.

growing up around rapidly advancing technology, i thought of technology as a double-edged sword: it cuts equally in both directions. but after thinking about it for a long time, i now believe that, in relation to human well-being, the presence of a given technology or combination of technologies can be a net positive, a net negative, or neither. we need to think more carefully before letting these genies out of their bottles.

this is not an example that i think will be very negative, but it's very powerful and unexpected, for me at least. the next powerful and unexpected thing may not be benign. banning development of these kinds of technologies should not be off the table.

after reading this: https://blog.openai.com/better-language-models/#sample8 and browsing reddit for a while, i have realized that from now on i cannot assume human origin for 90% of the comments i read on reddit. this is insane.


>i have realized that from now on i cannot trust 90% of the comments i read on reddit. this is insane.

I hate to be cynical here but I'm glad this has made you realize something that's been true since the Internet started; you shouldn't trust what's written on any forum! Be skeptical.


I agree with being skeptical but there is a point where rational skepticism just turns into pure cynicism and that's not productive.

Also, with the advances in deep fakes and synthetic video...how long before you can't trust video evidence either?


Got news for you! It's already possible to produce fairly accurate fake video of people saying things, in their own voice

https://www.youtube.com/watch?v=cQ54GDm1eL0


Hmm, video evidence may be trustworthy - e.g. video from a CCTV system. Perhaps it could be written onto some write-once, tamper-resistant format? Not sure how that would look.

I suppose one place to start thinking about this would be photos. Are photos admissible evidence or do courts only allow negatives? Photos have been modified for a very long time. This is probably the most famous example: https://amp.businessinsider.com/images/52af668569bedd3b2643d...

But yes, I am far more worried about how much more effective fake news will be once they start coming with actual videos.
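
For what it's worth, here is a rough sketch (my own illustration, not anything from the article) of how "write-once, tamper resistant" could work in software: chain a hash over each recorded frame and publish the running digests somewhere the camera operator can't rewrite. Editing any earlier frame then breaks verification:

    import hashlib

    GENESIS = b"\x00" * 32

    def build_log(frames):
        # Each digest commits to the previous digest plus the current frame,
        # so changing any earlier frame changes every digest after it.
        digests, prev = [], GENESIS
        for frame in frames:
            prev = hashlib.sha256(prev + frame).digest()
            digests.append(prev)
        return digests

    def verify_log(frames, digests):
        return build_log(frames) == digests

    frames = [b"frame-0001", b"frame-0002", b"frame-0003"]
    published = build_log(frames)   # digests anchored somewhere append-only
    frames[1] = b"frame-0002-edited"
    print(verify_log(frames, published))  # False: the edit is detectable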


CCTV on blockchain?


How long? Several years ago. Videos have been faked forever. There have been all sorts of optical illusions, forced perspective, and special effects for 100 years.

You shouldn't trust any single source, only a preponderance. Even then, be open to skeptics.


i meant that i can't trust whether or not it was written by a human, not whether or not i can trust the correctness of the comment. edited for clarity.


Trust and skepticism aren't binary.


Interesting, would you elaborate? Do you perhaps mean that even if you do trust someone, you should still be skeptical?


Someone's skepticism, knowledge, and careful assessment might lead them to think that a forum post has an X% chance of being machine-generated (as one example scenario). There are big differences between values of 0.1%, 1%, 10%, 50%, 90%, etc., and the resulting impact on those who are involved in that system.

Because of this, it isn't helpful to say, "Oh you should always be skeptical! It doesn't matter if things have changed significantly such that we have more reason to be skeptical now."
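
To put rough numbers on why the base rate matters (the 200-comments-a-day figure is just an assumption for illustration; only the arithmetic is the point):

    comments_read_per_day = 200  # assumed figure, purely for illustration

    for rate in (0.001, 0.01, 0.10, 0.50, 0.90):
        expected = rate * comments_read_per_day
        p_at_least_one = 1 - (1 - rate) ** comments_read_per_day
        print(f"rate {rate:>5.1%}: ~{expected:5.1f} bot comments/day, "
              f"P(>=1 in a day) = {p_at_least_one:.3f}")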


That makes sense, thank you!


"Trust, but verify"


> i have realized that from now on i cannot assume human origin for 90% of the comments i read on reddit. this is insane.

Ever since Photoshop got good (20+ years now?) we haven't been able to assume that images are "real" either and things turned out fine. We'll have to learn to be skeptical.

Anyway, Reddit already has dedicated bots (account names ending in "SS") posting and commenting on their own content, mostly hilarious but sometimes fairly "real". Check out /r/SubredditSimMeta:

https://www.reddit.com/r/SubredditSimulator/comments/3g9ioz/...


I personally have huge concerns regarding the public global distribution of what is clearly a weapons-grade technology. Authoritarian countries are already heavily invested in utilising these technologies for the purposes of suppressing the wills of their people.

However, there is nothing that will stop them from further developing these technologies even without access to the research from more liberal nations.

To halt development is to drop out of an arms race that we cannot afford to lose.


>and browsing reddit for a while, i have realized that from now on i cannot assume human origin for 90% of the comments i read on reddit. this is insane.

I wonder if eventually we'll have sites like Reddit or forums that require you to demonstrate who you are before joining, e.g. by providing a photo of you and your passport. The site wouldn't use that information for anything, but this would reasonably guarantee that there's a real identity behind every poster.


Wow. I am curious how "specialized" the training was because those sample responses are beyond remarkable.

I think we're going to face a lingering question with AI. We're imminently reaching the point where AIs will be able to generate fake everything. In the near future (if not the present!), I could, for all you know, be a fake writing lots of otherwise coherent posts, only to secretly jam in some sort of agenda I've been programmed to advocate for. And there could be millions, billions, an unlimited number of "me". Or the latest hottest site trying to sell itself on its own 'buzz' could be full of millions of people actively engaging on the platform, except none of them actually exist.

So do we try to keep these AI systems secret, or do we make them widely available and rely on a rapid shift in public consciousness as a result? It's one thing to try to tell people to engage in sufficient scrutiny over text, images, audio, and increasingly even video. It's another when people see that such fakes are trivially produced by anyone.

I do realize that the 'chaos scenario' sounds... chaotic... to put it mildly, but I think the underlying issue here is that these tools will reach the public one way or the other. By keeping them secret the big difference is that the public will be less aware of the impact they're having, and the players operating such tools will be disproportionately made up of people trying to use them for malicious purposes - be that advertising, political influence, or whatever else.


I think the sooner this kind of thing gets out in the open, the sooner sites being propped up in valuation based on fake users today can change.

On another note, think back to how people in general responded to things like ELIZA and other chatbots, or The Sims and other emergent-storytelling games. What if there was a "social media" platform where all your connections were purposely AI, like your own personal TV sitcom/drama/comedy/whatever? Surround yourself with people who think like you, that you can finally have "intelligent discussions" with, without heated arguments from all the idiots who disagree. People love gossip; what if you had an endless supply from fake people? "You'll never believe what Frank said to Janice!" "I thought Bob and Alice would be together forever."


A social media that is part Her, part Westworld. I think that could be a multi billion dollar idea.


Arguably, this is the best and the worst idea I've ever heard on HN.


A novel by William Hertling titled "Kill Process" follows the story of a startup building such a social network. It is depicted in a positive light.


> And in her ears the little Seashells, the thimble radios tamped tight, and an electronic ocean of sound, of music and talk and music and talking coming in.

-- Ray Bradbury, "Fahrenheit 451" (1953)

> Having lived through the horrors of the pre-smartphone days

> the agonizing pain of [..] standing in line at a grocery store

-- HN, 2017 ( https://news.ycombinator.com/item?id=15912378 )


"intelligent discussions" with things that have the biases of their creators? no matter what you do that will filter in, guess we truly are heading towards an age where the engineer decides what society [will] think


I'm not saying it's a good idea, but from my experiences on social media (forums, mailing lists, chat rooms, and the modern Facebook/Twitter/etc.), most people want an echo chamber: to be surrounded by people who can discuss whatever event but generally agree with their opinion.


What does fake even mean? The AI exists. What’s the difference between a human doing disinformation and an AI doing it?

You should already be skeptical of every comment, looking for what its agenda is.

The cheapness, and thus commonality, of AIs doing it will just wake people up to the fact that they should not have been so trusting all along.


The difference is between 50% of online content being disinformation and 99.999999% being disinformation as content is overrun with cheap bots. Think of what happened to Usenet, except worse: instead of forums being overrun with ads for V1agra, social sites will be overrun with fake members posting reasonable-sounding stuff that is laced with coordinated tactical falsehoods.

I think eventually every piece of information will have to be digitally signed and our devices will by default limit what we're exposed to based upon whitelists.
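
A minimal sketch of what "digitally signed + whitelist" could look like, using Ed25519 from the third-party Python 'cryptography' package (a library choice of mine, not something from the thread):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # An author you trust signs their post; your client keeps a whitelist of
    # public keys and only shows content whose signature verifies.
    author_key = Ed25519PrivateKey.generate()
    whitelist = [author_key.public_key()]   # keys you have chosen to trust

    post = b"This comment really was written by the key holder."
    signature = author_key.sign(post)

    def accept(post, signature, whitelist):
        for public_key in whitelist:
            try:
                public_key.verify(signature, post)
                return True
            except InvalidSignature:
                continue
        return False

    print(accept(post, signature, whitelist))                 # True
    print(accept(b"tampered content", signature, whitelist))  # False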


You've more or less described online discourse right now.


The difference is that AI is scalable.


We are lucky in that we can still read a comment such as yours and know it is a human. I think. Maybe not.


OK so...

So, they never say this is near flawless, or that it would fool you in a Turing test. In some contexts, though, it may be usable maliciously. It could spoof Amazon reviews (as they mention), scalably fish for romance-scam victims, sockpuppet political social media, harass, manipulate, scale troll-farming to new levels, or set up dates for you on Tinder.

The point is that the ability to impersonate humans is potentially troublesome. I don't think non-publication is an answer, but I do think the concern seems valid... to me.


This is just a hastily assembled excuse for not living up to the expectation created by their name. OpenAI have also failed to release their Dota 2 model, which has absolutely no security implications, despite the fact that it cannot be properly tested without public release. OpenAI isn't.


Those implications don't even look that scary to me.

On the other hand, I've seen enough marketing fakes/mock-ups to be skeptical on this one. For example, my takeaway from OpenAI Five was that the bots out-microed the human players, with little more to it.


This is utter BS. Dude, you are not releasing your AI because you're scared that people will find out for sure that you have been fooling people for a long time in the name of AI.


Exactly. Either show your code or the story that AI can generate fake news is, itself, fake news.


Maybe when this genie is out of the bottle everywhere it will convince us to re-prioritize face to face communication and simpler lives with less computing.


Any computer program is an AI.

What matters is how intelligent they are along various axes.

Automatic programs have been surpassing humans on some dimensions for ages, but we keep insisting that they are not truly intelligent because they can't beat us along all axes. Throughput on simple logic tasks was the elephant in the room, and the scope of "simple" has been expanding at an exponential pace.

Now they are closing the gap or surpassing us on axes that were thought to be bastions of human cognition (TFA, and after chess and Go, DeepMind's AlphaStar recently beat two StarCraft 2 pros).

Freaking out (err... I mean "not releasing the full model") is understandable, but ultimately misguided, as it will only delay the unavoidable... unless the plan is to enact a global ban on AI research, which I don't think is feasible anyway.




