
As mentioned in the other comment, this is a deceitful framing. GPT-2 was made for generating text, and the specific thing it's dangerous for is disinformation bots on Twitter/Reddit and fake generated articles, two things the original model excels at. It's like claiming that nuclear bombs aren't dangerous because they're not good at taking us to space.



What I meant by "state" is status. As in, this kind of humble, retro-styled application is where we find GPT-2 in 2020, instead of dominating the news cycle with the things you claim it is so good at. If it excels at generating text, where are the fake articles and tweets? It's just not that good.


Have you been reading the mass of comments/tweets on political stories lately? Some of them do seem bot-like to me. I don't have a strong opinion about how much of this is happening. But I know I could personally run a bot that generates political spam with GPT-2. I know a lot of people want to influence the political conversation. I mean, people get paid salaries to write comments online. So I can't help but suspect that someone is using tools that allow it to be done a lot more cheaply.
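
To make that concrete, here's a rough sketch of what the text-generation half of such a bot could look like, assuming the Hugging Face transformers library and the publicly released GPT-2 weights (the prompt and sampling settings are just placeholders, not anyone's actual setup):

    # Rough sketch only: generate comment-like text from a prompt with
    # the public GPT-2 weights via the Hugging Face pipeline API.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    samples = generator(
        "The real problem with this policy is",  # placeholder prompt
        max_length=60,
        do_sample=True,          # sample so the three outputs differ
        num_return_sequences=3,
    )
    for sample in samples:
        print(sample["generated_text"])

A spammer would still need accounts and posting infrastructure, but the text itself really is that cheap to produce.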


Saying that it could be in wide use but that you wouldn't be able to detect it is an unfalsifiable claim, though.


The way to falsify it is to track down the identity of the human author of every internet comment. Simple!

To get philosophical, I won't say that I "know" it's happening. Most of the world consists of things I don't know about. The best I can do is build a mental model based on what I do know.


It does work, though. Here's an example: https://techscience.org/a/2019121801/ There's also https://arxiv.org/pdf/1908.09203.pdf#page=48

Why aren't bad actors using it in the wild? I think it's a combination of them being technically unsophisticated and conservative, propaganda not actually working nearly as well as people like to think it does, and our failure to detect the competent actors (things like StyleGAN being used for fake FB profiles are caught only through carelessness, like leaving the faces exactly aligned).
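
As a toy illustration of that last point, here's what the alignment check could look like, assuming the face_recognition library (the filenames are hypothetical, and real detection pipelines are obviously more involved):

    # Toy check for the "exactly aligned faces" tell: StyleGAN profile
    # photos from the same pipeline tend to have eyes at nearly identical
    # pixel coordinates. Assumes same-sized images with one face each.
    import numpy as np
    import face_recognition

    def eye_centers(path):
        image = face_recognition.load_image_file(path)
        marks = face_recognition.face_landmarks(image)[0]  # first face only
        return [np.mean(marks[eye], axis=0) for eye in ("left_eye", "right_eye")]

    # Hypothetical profile photos from a cluster of suspicious accounts.
    photos = ["profile_a.jpg", "profile_b.jpg", "profile_c.jpg"]
    centers = np.array([eye_centers(p) for p in photos])

    # Near-zero spread in eye positions across accounts is a red flag.
    print("per-eye std dev of positions:", centers.std(axis=0))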


By definition, if the tweets and articles are good, they are indistinguishable from real ones. That's exactly what makes them dangerous. If you could detect them, they wouldn't be dangerous anymore.


You should see the too-dangerous-to-release AI model my team built. It actually governs nations in a manner indistinguishable from a human. There are nations currently under its thrall that you wouldn't even imagine. The world leaders of these nations merely speak what speeches it writes for them.



