[dupe] OpenAI API (openai.com)
53 points by bookofjoe 49 days ago | 24 comments




One disadvantage of an API is that OpenAI controls what the model is allowed to output. They’re already working on a content filter in response to the latest wave of social media criticisms. (The head of Facebook AI publicly accused OpenAI of deploying an unsafe model, and said that it shouldn’t be so easy to get it to generate racial stereotypes. https://twitter.com/sama/status/1285985962250534912?s=21)

I wonder if the content filter will become mandatory.

The flip side is that we’re probably looking at the Apple model of AI software. The App Store was a repulsive idea when it was first introduced. Now it’s a fact of life.


Currently, the content filter on the Web only outputs a warning after the text has finished generating. (And it's apparently returned as a field in the programmatic API.)

Given the number of false positives the content filter hit in my testing (it'll trigger on any profanity), it would not be good to require the filter, but offering it behind a flag would be helpful.
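
For what it's worth, a hypothetical sketch of what an opt-in flag could look like on the client side, assuming the filter result really does come back as a field on the completion (the field name below is made up for illustration, not taken from the docs):

    def apply_filter(completion: dict, enforce: bool = False) -> str:
        """Return the generated text, optionally suppressing flagged output."""
        text = completion["choices"][0]["text"]
        # "content_filter_flag" is an assumed field name, for illustration only.
        flagged = completion.get("content_filter_flag", False)
        if enforce and flagged:
            return "[content filtered]"
        return text

    # Pass-through by default; suppression only when the caller opts in.
    fake_response = {"choices": [{"text": "hello"}], "content_filter_flag": False}
    print(apply_filter(fake_response))                 # "hello"
    print(apply_filter(fake_response, enforce=True))   # "hello" unless flagged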


> The App Store was a repulsive idea when it was first introduced.

It's still repulsive.


So there's going to be a model that controls the output of other models? I guess it really is turtles all the way down...


It's still a repulsive idea and I believe we should be actively telling our friends and coworkers that it's bad capitalism to use it.


I don't see anything wrong with the Apple app store - they just need to be forced to bring the Apple percentage down (they clearly won't do it on their own).


Famous last words before your app is banned.


I was really excited about this and applied around a month ago, but still haven't gotten any indication of when I can get access. I'm curious whether others who got early access had much longer use-case descriptions, were famous, or something else.


Been waiting for a few weeks as well... maybe @gdb will read this thread and can expedite the process for HNers.. :P


waves

Given the amount of demand, we're trying to prioritize folks who want to build a concrete application or integrate with a product.

Please feel free to email me (gdb@openai.com) and let me know what you'd like to build — I can't guarantee I'll be able to accelerate an invite, but at the very least I'll make sure we're tracking your use-case internally.


This. :-)


As far as I can see, this is the same announcement from 6 weeks ago - there's nothing new here, it's still in beta and you still have to go on the waitlist.


Question from a non-ML expert: How can I be sure that code that works with one version will still work after they update/re-train the models?

More specifically, for DOTA they could track progress and make sure there weren't important regressions... but this seems so general, how can they make sure it improves everyone's use cases?


You can’t! :)

It’s a fact of life. A different model will generate different outputs for the same prompts. And some of those outputs will be worse than they were.

But, if you use the same prompt with the same model, the output will always be exactly the same (content filters notwithstanding).


> But, if you use the same prompt with the same model, the output will always be exactly the same (content filters notwithstanding).

Isn't this only true if you set the temperature parameter in a way that renders the model deterministic?


The temperature parameter controls a random number generator, which itself is controlled by a seed. I assume (or hope) you can specify the seed via the API.


Even the same model generates different outputs for the same prompts. GPT has a temperature parameter that allows you to restrict the variety of text generated, but even then the output differs for the same model and prompt.
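
For reference, a rough sketch of where the temperature parameter goes in a request. The endpoint path and engine name below are assumptions based on the beta docs as I remember them, so double-check before relying on this; at temperature 0 sampling should be greedy, though as said above identical outputs across runs still aren't guaranteed:

    import os
    import requests

    api_key = os.environ["OPENAI_API_KEY"]

    resp = requests.post(
        # Assumed beta endpoint and engine name; verify against the current docs.
        "https://api.openai.com/v1/engines/davinci/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "prompt": "Q: What is 2 + 2?\nA:",
            "max_tokens": 5,
            "temperature": 0,  # 0 = greedy decoding; higher values add randomness
        },
    )
    print(resp.json()["choices"][0]["text"])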


The same way ML models in production at large companies behave: model versioning.

If you ping the OpenAI API without any explicit model specification, it'll return davinci:2020-05-03, which includes the version date.
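
As a rough illustration (endpoint and engine name assumed from the beta, so verify against the docs), you can log the model field on every response and treat a change in the dated version as a signal to re-run your own prompt regression suite:

    import os
    import requests

    api_key = os.environ["OPENAI_API_KEY"]

    resp = requests.post(
        # Assumed beta endpoint; "davinci" is the default engine per the comment above.
        "https://api.openai.com/v1/engines/davinci/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": "Hello", "max_tokens": 1},
    )
    served_model = resp.json().get("model")  # e.g. "davinci:2020-05-03"

    EXPECTED = "davinci:2020-05-03"
    if served_model != EXPECTED:
        # Re-run your own prompt regression tests before trusting the new model.
        print(f"Model changed: expected {EXPECTED}, got {served_model}")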


I know exactly the project I would like to work on: "english to 3d graphics shader". The problem is, it can be difficult to express what a shader does in plain human English in the first place. ShaderToy currently lists approximately 40K+ shaders, and their human-annotated descriptions can be quite creative ;)


I'd love to see this and I'd work with you on this. There are probably other niches of procedural generation that GPT-3 might excel at.


Hit me up. Email in profile. Starting right away in building an encyclopedic library of GLSL shader code for training ;)


@ the other comments: email Greg Brockman directly if you have a good idea and he'll speed up your access. I applied only a few weeks ago and I'm not famous (no Twitter account); I just mentioned I'm an MS student and had an idea of what I wanted to do.


I really want access to the API but I have no immediate use case and I'm not "AI community famous". I have vague ideas about prompting it to create 3D models (I work in games/graphics), but I have no idea if that will actually work given the data set.



