
Most likely for latency and cost reasons. A model that's 10x as big requires roughly 10x the hardware to serve at the same latency. And since most generations are not too long, a smaller finetuned model should work well enough.
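
As a back-of-envelope sketch of that scaling (every number below is a made-up placeholder for illustration, not a figure from the comment): if per-GPU decode throughput falls roughly in proportion to parameter count (memory-bandwidth bound at modest batch sizes), then serving a 10x bigger model at the same throughput takes about 10x the GPUs.

    # Back-of-envelope serving math. All numbers are assumed
    # placeholders; only the inverse-scaling relationship matters.
    def gpus_needed(model_size_b: float,
                    toks_per_s_per_gpu_at_7b: float = 100.0,
                    target_toks_per_s: float = 1000.0) -> float:
        # Assume per-GPU decode throughput scales ~1/params.
        per_gpu = toks_per_s_per_gpu_at_7b * (7.0 / model_size_b)
        return target_toks_per_s / per_gpu

    print(gpus_needed(7))   # 10.0 GPUs for the small model
    print(gpus_needed(70))  # 100.0 GPUs: 10x the model, ~10x the hardware

Under these assumptions the hardware bill scales linearly with model size at fixed latency, which is the cost argument for the smaller finetuned model.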


