True. The potential of GPT-3 to cause internet mayhem was/is significant. I would argue that the mere act of announcing it was still a catalyst for an eventual GPT-3-like model being released. In revealing it, they established a target for what open source models could aim to achieve, and simultaneously got bad actors thinking about ways to abuse it.
It was a credible argument when GPT-3 was released. But now there are open models that are as capable as GPT-3 and that mayhem has not materialized, with the possible exception of GPT-4chan. They could release it now under a non-commercial license, if they cared to.
My experience with GPT-3 is that while it does perform better than the small GPT-style models, the gap doesn't make up for the fact that the small models are free and unrestricted, and you can use them as much as you like.
As mentioned elsewhere in the thread, there are some large models in the 50-200B parameter range that compete directly with GPT-3, but I haven't used them.
Two reasons. First, someone else will release something similar. Second, I didn't see a related push from them to work with others in the industry to do something productive towards safety with the time they bought by delaying availability of these kinds of models. So it felt disingenuous.
Several groups already have. Facebook's OPT-175B is available to basically anyone with a .edu address (checkpoints up to 66B are freely downloadable), and BLOOM-176B is fully open.
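For anyone who wants to try them, the freely downloadable OPT checkpoints (and BLOOM) can be pulled straight from Hugging Face. A small OPT variant is used below purely for illustration:

    # Sketch: load one of the freely available OPT checkpoints with transformers.
    # opt-1.3b is used here to keep it small; the largest freely downloadable one
    # is opt-66b, and BLOOM is "bigscience/bloom" if you have the hardware.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "facebook/opt-1.3b"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    prompt = tok("Open models have", return_tensors="pt")
    out = model.generate(**prompt, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))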
I don’t see how GPT-3 is any more dangerous than Stable Diffusion, Photoshop, that fake news website the crazy person you’re friends with on Facebook really likes, or any of the number of other tools and services that can be used to generate or spread fake information.
I wouldn't really say Stable Diffusion marks images as AI-generated. There's a script in the Stable Diffusion repository that will do that, but it's not connected to the model itself in a meaningful way. I use Stable Diffusion a lot and I've never touched this script.
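For reference, that kind of invisible watermark is just a post-processing pass over the finished image. Here's a minimal sketch using the invisible-watermark (imwatermark) package the script builds on; the payload bytes are an assumption for illustration, not necessarily what the repository embeds:

    # Minimal sketch: embed and read an invisible watermark in a generated image
    # with the invisible-watermark package. Payload is illustrative only.
    import cv2
    from imwatermark import WatermarkEncoder, WatermarkDecoder

    payload = b"StableDiffusionV1"  # assumed payload, for illustration

    # Embed: hide the bytes in the image's frequency domain (DWT + DCT).
    img = cv2.imread("generated.png")  # BGR image written out by the pipeline
    encoder = WatermarkEncoder()
    encoder.set_watermark("bytes", payload)
    cv2.imwrite("generated_wm.png", encoder.encode(img, "dwtDct"))

    # Detect: decode the same number of bits out of a suspect image.
    decoder = WatermarkDecoder("bytes", len(payload) * 8)
    recovered = decoder.decode(cv2.imread("generated_wm.png"), "dwtDct")
    print(recovered)  # b"StableDiffusionV1" if the mark survived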
Trivial to remove, I give you that. But AFAIK, the original repository + most forks put the watermark automatically unless you've removed it on your own.
>Trivial to remove, I give you that. But AFAIK, the original repository + most forks put the watermark automatically unless you've removed it on your own.
Almost all of the 'low-vram' variant forks either have an argument to turn off the watermark (it saves a bit of memory) or come with it disabled altogether.
It would be pretty trivial to embed an invisible watermark in GPT-3 output, though you don't really need one: just score text with GPT-3 to find out whether it was likely GPT-3-generated or not.
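A rough sketch of what that scoring could look like against the legacy completions API; the model name and threshold are placeholders, and this is a weak heuristic rather than a calibrated detector:

    # Rough sketch: use GPT-3's own per-token log-probabilities as a signal that
    # a text was model-generated. Model name and threshold are placeholders.
    import openai

    def avg_logprob(text, model="davinci"):
        resp = openai.Completion.create(
            model=model,
            prompt=text,
            max_tokens=0,  # generate nothing new
            echo=True,     # return the prompt itself...
            logprobs=0,    # ...annotated with per-token log-probabilities
        )
        lps = resp["choices"][0]["logprobs"]["token_logprobs"]
        lps = [lp for lp in lps if lp is not None]  # first token has no logprob
        return sum(lps) / len(lps)

    sample = "Some passage you suspect was machine-written."
    # Text the model finds unusually "unsurprising" is more likely its own output.
    if avg_logprob(sample) > -1.5:  # illustrative threshold only
        print("plausibly GPT-3-generated")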
I can understand not releasing GPT-3, even if I disagree with the decision.