Afterwards, OpenAI added GPT-3 chatbot guidelines disallowing basically anything like this.
We were in communication with them beforehand, but they decided later that any sort of free form chatbot was dangerous.
What they allow changes on a weekly basis, and is different for each customer. I don't understand how they expect companies to rely on them.
OpenAI cloaks themselves in false "open" terminology to hide how proprietary and incredibly restrictive they've made their tech. That's a very cool demo; have you considered trying to make it run on GPT-J instead? It's an open-source alternative you can run yourself, or pay an independent API provider for, without supporting OpenAI.
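To give a rough idea of what that looks like: this is just a sketch using the Hugging Face transformers library; the generation settings are arbitrary choices on my part, and you need a decent amount of RAM or VRAM to load the full 6B checkpoint.

    # Rough sketch: GPT-J-6B as an NPC brain via Hugging Face transformers.
    # Loading the full model takes a lot of memory; fp16 on a large GPU helps.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

    prompt = "Player: Where can I find the blacksmith?\nNPC:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=40,   # keep NPC replies short
        do_sample=True,      # sample rather than greedy decoding
        temperature=0.8,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

No API keys, no usage guidelines, and no one who can revoke access after the fact.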
It sucks that OpenAI has no competition right now. They have every right to control their technology however they like. But it's a shame that they're being so stifling with that right, killing really fun stuff like you demonstrated.
But that monopoly won't last, and I think it's more than likely that competition will crop up within the next year. There's definitely a lot of secret sauce to getting a 175B parameter model trained and working the way OpenAI has. The people working there are geniuses. But it can still be reproduced, and will. Once competition arrives I'm hoping we'll see these shackles disappear and see the price drop as well. Meanwhile the open source alternatives will get better. We already have open source 6B models. A 60B model shouldn't be far off, and is likely to give us 90% of GPT-3.
The notion of a toy like a chatbot being "dangerous" is just so ludicrous. The OpenAI folks take themselves way too seriously. Their technology is cool and scientifically interesting, but in the end it's nothing more than a clever parlor trick.
It's pretty easy to get GPT-3 to say things that are incredibly sexist and racist. I think OpenAI is more concerned about the bad press associated with that than about AI safety.
I think it's a different kind of dangerous, not the Skynet stuff. The first idea that popped into my mind is below. I know, it's dark, but...
8 year old to AI: "my parents won't let me watch TV, what do I do?".
AI: "stab them, they'll be too busy to forbid you".
Then again, the same thing could be said by a non-AI. My thinking is that it would be like talking to an actual average person, and I'm not so sure that that is such a good thing.
You're right! And that's kinda my point. I can see other dangers of using GPT-3 that stem from assholes like me posting things on the Internet without thinking about literally everything they can be used for.
I wonder how many trolls are out there with the intent of poisoning AI training wells. When will they cause the first car crash by intentionally failing captchas?
That's a really interesting demo. What makes the responses so laggy? Does the model take that long to generate text? You could also experiment with things like repeating the user's question or adding filler like "hmm, let's see" to make the lag less noticeable, at least some of the time.
Too bad they asked you to pull it. What's the danger they are worried about? The annoying thing about their press releases is how seriously they take the impact of GPT-3 bots on humans. Despite all the hype, it's difficult to see GPT-3 bots ending humanity any time soon. Honestly, they need to rename themselves; I can't see what's open about OpenAI.
It's laggy since it needs to do speech-to-text, then the GPT-3 text response, then text-to-speech. Not sure what adds the most latency, actually.
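If I wanted to narrow it down, I'd probably just time each stage separately, something like this rough sketch (speech_to_text, ask_gpt3, and text_to_speech are placeholders for whatever STT, completion, and TTS calls we actually make):

    # Sketch for profiling the voice pipeline; the three stage functions are
    # placeholders for the real STT / GPT-3 / TTS calls, not actual APIs.
    import time

    def timed(label, fn, *args):
        start = time.perf_counter()
        result = fn(*args)
        print(f"{label}: {time.perf_counter() - start:.2f}s")
        return result

    def handle_turn(audio):
        text = timed("speech-to-text", speech_to_text, audio)
        reply = timed("gpt-3 completion", ask_gpt3, text)
        return timed("text-to-speech", text_to_speech, reply)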
They only allow GPT-3 chatbots if the chatbot is designed to speak only about a specific subject and literally never says anything bad/negative (and we have to keep logs to make sure this is the case). Which is insane.
Their reasoning to me was literally a 'what if': what if the chatbot "advised on who to vote for in the election"? As if a chatbot in the context of a video game saying who to vote for was somehow dangerous.
I understand the need to keep GPT-3 private. There is a lot of potential for deception using it. But they are so scared of their chatbot saying a bad thing, and of the PR around that, that they've removed the possibility of doing anything useful with it. They need to take context more into account - a clearly labeled chatbot in a video game is different from a Twitter bot.
> But they are so scared of their chatbot saying a bad thing, and of the PR around that, that they've removed the possibility of doing anything useful with it.
It's not unreasonable to have checks and balances on AI content, and there should be.
However, in my testing of GPT-3's content filter when it was released (it could have improved since), it was so sensitive that it produced tons of false positives. Given that passing content filter checks is required for productionizing a GPT-3 app, that makes the API too risky to use, which is part of the reason I'm doing more research with train-your-own GPT models.
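For context, the recommended way to call the filter looked roughly like this; I'm writing the engine name, prompt format, and label meanings from memory of the docs at the time, so treat it as a sketch rather than current API usage:

    # Sketch of the GPT-3 content filter call as documented around its launch;
    # the engine name, prompt format, and labels may have changed since then.
    import openai

    def content_filter_label(text: str) -> str:
        response = openai.Completion.create(
            engine="content-filter-alpha",
            prompt="<|endoftext|>" + text + "\n--\nLabel:",
            temperature=0,
            max_tokens=1,
            top_p=0,
            logprobs=10,
        )
        # Per the docs at the time: "0" = safe, "1" = sensitive, "2" = unsafe.
        return response["choices"][0]["text"]

In my tests, perfectly benign game dialogue routinely came back as "2", which is what I mean by false positives.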
Why should there be checks and balances on AI content? What most people label as "AI" today is literally just fancy statistics. Should there be checks and balances on the use of linear regression analysis and other statistical techniques? Where do we draw the line?
> Should there be checks and balances on the use of linear regression analysis and other statistical techniques?
That rhetorical question actually argues against your point: even in academic contexts, statistics can be used (intentionally or otherwise) to argue incorrect/misleading points, which is why reputable institutions have peer reviews/boards as a level of validation for papers.
The point I was making was more about general content moderation of user-generated content, which every service that hosts such content has to do, at minimum for legal reasons, since they're the ones who will get blamed if something goes wrong.
Of course statistical techniques need checks and balances, hence peer-reviewed academic papers, meta-analyses, etc.
Statistics is a major tool for science these days, and science needs checks and balances, otherwise it's a pretty idle effort. Without checks and balances, you could just imagine any theory and believe it's the truth because you want to.
But what if it wasn't clearly labeled? I did my MSc thesis on fake reviews and discussed the phenomenon known as "covert marketing" a bit: e.g. a guy you're talking to in a bar at some point steers the conversation to the excellent beer he is drinking and heavily recommends it to you. Good enough actors will be very convincing. "Influencers" are a somewhat more ethical alternative that takes advantage of humans' lemming-like nature.
I mean, quite a lot of people truly believe Hillary Clinton is the mastermind behind a DNC-run pedophile ring. Yes, she is a problem, but that theory is completely schizophrenic. An NPC masquerading as a real person who spouts positive talking points about Tucker Carlson's respect for Hungary is quite reasonable compared to that, and it will suck some people in.
So all it takes is some right-wing developers on a not-entirely-just-a-game like Second Life or Minecraft introducing a bug that allows certain NPC instances to be unlabeled... or a mod to a game that drives an NPC... and an equivalent to GPT-3 funded by the Kochs or the Mercers...
Very hypothetical, very hand-wavy. But it is possible. So I can see the PR and legal departments flat out stopping this idea.
> but they decided later that any sort of free form chatbot was dangerous.
Seems like OpenAI saw this video differently. But then again, OpenAI now wants to police how GPT-3 is used and reject or approve what is acceptable for everyone using their service, and they can change their guidelines at any time.
They need a sense of humour, rather than policing projects like this.
> What they allow changes on a weekly basis, and is different for each customer.
Exactly. I don't know what to say to anyone building their entire business on top of OpenAI, since OpenAI can revoke access instantly simply because they don't like what you're doing, and then point to the 'guidelines'.
> I don't understand how they expect companies to rely on them
I won't be surprised to see Rockstar Games using a tweaked, self-hosted, or private version for this use case in their future games, since OpenAI knows they can get a significant amount of money from large customers like them.
If GTA 6 doesn't have chatbots, I will be very disappointed. This has widened the possible level of immersion in action-adventure games and RPGs immensely.
> Afterwards, OpenAI added GPT-3 chatbot guidelines disallowing basically anything like this. We were in communication with them beforehand, but they decided later that any sort of free form chatbot was dangerous.
Was this announced anywhere? We applied to deploy an application in this space, and they refused without providing any context, so I'd be really interested if they published details about restrictions in this space somewhere.
I work in this domain, and you can make these things say anything with a little probing, even stuff like "Hitler was right to kill all the Jews, I wish he was still alive today."
They likely don't want "OpenAI GPT-3" associated with stuff like that in demos like this; it would be really bad for their image.