A primer on the OpenAI API – Part 1 (scottlogic.com)
39 points by ColinEberhardt on Sept 1, 2021 | 3 comments


Good piece. Been getting familiar with Codex, and so much of the success depends on the quality of the input (prompt). No surprise: garbage in, garbage out holds true here as you might expect. It gets more interesting when you start pulling the temperature lever, which controls the randomness/creativity of the model; a higher temperature can sometimes recover the intent from a poor prompt, but the results are then much harder to reproduce. Fascinating stuff.
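
For reference, a minimal sketch of how the temperature parameter is passed. This assumes the 2021-era openai Python client and a Codex-style engine name; the engine name, prompt, and values here are illustrative, not taken from the article.

    # Sketch only: assumes the pre-1.0 openai Python library and a
    # hypothetical Codex engine name; check the current docs before use.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    response = openai.Completion.create(
        engine="davinci-codex",   # assumed engine choice for code generation
        prompt="# Python function that reverses a string",
        max_tokens=64,
        temperature=0.0,          # 0 = near-deterministic; closer to 1 = more varied/creative output
    )
    print(response.choices[0].text)

With temperature at 0 the same prompt tends to return the same completion, which makes results reproducible; raising it trades that reproducibility for more varied output.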


While garbage in, garbage out may seem like a bad policy to the user, to the AI system it means there can be a closed feedback loop, where the final code (the solution) can be linked back to the initial input, regardless of whether the input was garbage or not.

I would say that anything that can be stated as a large-scale supervised reinforcement learning problem is a gold mine -- provided, of course, that the output has value and supervision is free.

Tesla self-driving and Comma.ai, from an eagle's eye view, exploit the same concept.


Oh, I read the title wrong. I was expecting something about OpenAPI.




