I wonder if the content filter will become mandatory.
The flip side is that we’re probably looking at the Apple model of AI software. The App Store was a repulsive idea when it was first introduced. Now it’s a fact of life.
Given the number of false positives the content filter produced in my testing (it triggers on any profanity), requiring the filter would not be good, but offering it behind a flag would be helpful.
It's still repulsive.
Given the amount of demand, we're trying to prioritize folks who want to build a concrete application or integrate with a product.
Please feel free to email me (email@example.com) and let me know what you'd like to build — I can't guarantee I'll be able to accelerate an invite, but at the very least I'll make sure we're tracking your use-case internally.
More specifically, for DOTA they could track progress and make sure there weren't important regressions. But this API seems so general: how can they make sure a new model improves everyone's use-cases?
It’s a fact of life. A different model will generate different outputs for the same prompts. And some of those outputs will be worse than they were.
But if you use the same prompt with the same model, the output will always be exactly the same (content filters notwithstanding).
Isn't this only true if you set the temperature parameter in a way that renders the model deterministic?
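Right — a toy sketch (not OpenAI's actual decoder, just the standard temperature-sampling idea) of why temperature matters for determinism: at temperature 0 decoding degenerates to greedy argmax, which is repeatable; at any positive temperature the next token is sampled from a softmax over scaled logits, so different random draws give different outputs.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits at a given temperature."""
    # Temperature 0 is conventionally treated as greedy decoding:
    # always take the most likely token, so output is deterministic.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise scale logits by 1/temperature and sample from the softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]  # made-up logits for three candidate tokens

# Greedy: every seed picks the same (most likely) token.
greedy_picks = {sample_token(logits, 0, random.Random(seed)) for seed in range(5)}

# Temperature 1.0: different seeds can and do pick different tokens.
sampled_picks = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(50)}
```

So "same prompt, same model, same output" holds only under greedy (temperature-0) decoding; with sampling enabled, repeat calls can legitimately differ even with nothing else changed.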
If you ping the OpenAI API without an explicit model specification, it'll return davinci:2020-05-03, which carries the version date in its name.