codewithcheese | 60 days ago | on: Lessons after a Half-billion GPT Tokens
It can be faster and more effective to fall back to a smaller model (GPT-3.5 or Haiku): weaknesses in the prompt will be more obvious on a smaller model, and your iteration time will be faster.
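The workflow the comment describes could be sketched as a simple model switch while iterating on a prompt (the model names and helper below are illustrative, not from the comment):

```python
# Sketch of the fallback described above: while iterating on a prompt,
# route calls to a smaller, cheaper model so weaknesses surface quickly,
# then switch back to the larger model once the prompt is solid.
# Model names are illustrative; substitute whatever your stack uses.
DEBUG_MODEL = "gpt-3.5-turbo"  # smaller model: faster, exposes weak prompts
PROD_MODEL = "gpt-4"           # larger model used in production


def pick_model(iterating: bool) -> str:
    """Return the smaller model during prompt iteration, else the prod model."""
    return DEBUG_MODEL if iterating else PROD_MODEL
```

In use, this might look like `client.chat.completions.create(model=pick_model(iterating=True), ...)`: the same prompt runs against both models, and flaws that the larger model papers over stand out on the smaller one.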
JeremyHerrman | 60 days ago
great insight!