> not extraordinary compared to state of the art, open NLP research

> misrepresents progress in the field

Can you point me to some examples of unsupervised learning with similar results? Not asking rhetorically; I was just genuinely shocked by how compelling their results were, especially given that this was unsupervised.

> OpenAI's behaviour here smells of Gibsonesque 'anti-marketing'

I don't disagree that the ethics are questionable, but I think it's highly speculative to suggest that they didn't release the full model purely as a marketing ploy (I'm assuming this is the main objection to their marketing "tactics"). As you say, it "smells" this way, but I fail to see how it's really so clear-cut.




> Can you point me to some examples of unsupervised learning with similar results? Not asking rhetorically; I was just genuinely shocked by how compelling their results were, especially given that this was unsupervised.

Model-wise, this is just OpenAI's GPT with some very slight modifications (laid out in the paper).

Ilya has now commented in the thread and essentially made the same point: this is state-of-the-art performance, but reproducible by anyone because it uses a known architecture.

The secrecy and controversy make no sense if the model is open; even the methodology of the data collection is laid out. There is no safety gained here, since anybody who wants to rebuild the model can do so simply by putting enough effort into rebuilding the dataset, which is not an obstacle for a seriously malicious actor.
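
For what it's worth, the "known architecture" point is easy to make concrete: a GPT-style model is essentially a stack of masked self-attention blocks. Here is a minimal sketch of one such block, assuming PyTorch; the sizes are illustrative defaults, not GPT-2's actual configuration, and the full model adds token/position embeddings and an output head on top:

    import torch
    import torch.nn as nn

    class GPTBlock(nn.Module):
        """One decoder-only Transformer block (pre-LayerNorm, as in GPT-2)."""
        def __init__(self, d_model=768, n_heads=12):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(          # position-wise feed-forward, 4x expansion
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )

        def forward(self, x):
            # Causal mask: True entries mark positions a token may NOT attend to,
            # i.e. everything to its right.
            n = x.size(1)
            mask = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), 1)
            h = self.ln1(x)
            attn_out, _ = self.attn(h, h, h, attn_mask=mask)
            x = x + attn_out                   # residual connection
            x = x + self.mlp(self.ln2(x))      # residual connection
            return x

    x = torch.randn(1, 16, 768)                # (batch, sequence, embedding)
    print(GPTBlock()(x).shape)                 # torch.Size([1, 16, 768])

Stack a few dozen of these and train on a large scraped corpus, and you have essentially the whole recipe, which is exactly why withholding the weights buys so little safety.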


> Model-wise, this is just OpenAI's GPT with some very slight modifications (laid out in the paper).

> The secrecy and controversy make no sense if the model is open; even the methodology of the data collection is laid out.

This is exactly why I found the results so compelling: it suggests that this technology is already accessible to some big players. The odds that a big corporation or government agency has already begun using it are high, which is precisely why the public needs to start thinking about it.

I can't know exactly why OpenAI chose to withhold the model, especially given how easy it would be to recreate. But even if we assume they withheld the full model purely to drum up controversy, the controversy itself is justified: it's very likely that this technology is already in the hands of a few big players.


Interesting to think about whether state actors already have such technology.

If they did, I bet it would be used for automated "troll farms".

Like a weaponized, malicious ELIZA, it would have fake user profiles reacting to keywords, spinning up suitable counter-arguments and/or lies for as long as it takes to change opinions and perceptions, relentlessly, day and night.



