
... Fine-tuning for OpenAI's GPT LLMs has been available for years now, at least since the GPT-3 private beta if not earlier (and obviously you could train the open models yourself).


That's true, but it was expensive, and until recently you could only tune older versions of GPT-3 that lacked both instruction tuning and the code pre-training of the Codex models (from which GPT-3.5 is thought to descend). You had to want tuning so badly that you were willing to pay 6x the token cost and go back two years in time.


But not for the GPT-3.5 family.
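
For anyone who hasn't tried it, a minimal sketch of what kicking off a gpt-3.5-turbo fine-tune looks like with the official openai Python SDK (v1.x). The file name "train.jsonl" and the printed fields are just placeholders for illustration:

    # Sketch: start a gpt-3.5-turbo fine-tuning job (openai SDK v1.x).
    # train.jsonl is a hypothetical file of chat-format examples:
    #   {"messages": [{"role": "user", "content": "..."},
    #                 {"role": "assistant", "content": "..."}]}
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload the training data with the fine-tune purpose.
    train_file = client.files.create(
        file=open("train.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Create the fine-tuning job against the gpt-3.5-turbo base model.
    job = client.fine_tuning.jobs.create(
        training_file=train_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id, job.status)

Once the job finishes, the resulting model gets its own name (prefixed ft:gpt-3.5-turbo) and can be called through the chat completions endpoint like any other model.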



