Hacker News

> Most LLMs are currently run at a loss as far as I know.

That's the entire question. Inference is cheap as all hell. I think OpenAI is covering the cost of compute at $20/month per user, because I can run llama on my laptop without the fans spinning up. Of course, only OpenAI knows how much it actually costs to run, and that's what your argument hinges on.
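The inference-is-cheap claim can be sanity-checked with a back-of-envelope estimate. Every number below is an illustrative assumption, not a known OpenAI figure:

```python
# Back-of-envelope estimate of inference compute cost per subscriber.
# All constants are assumptions for illustration only.

COST_PER_1M_TOKENS_USD = 2.0   # assumed blended cost to serve 1M tokens
TOKENS_PER_REQUEST = 1_000     # assumed average prompt + completion size
REQUESTS_PER_DAY = 30          # assumed usage of an active subscriber
DAYS_PER_MONTH = 30

monthly_tokens = TOKENS_PER_REQUEST * REQUESTS_PER_DAY * DAYS_PER_MONTH
monthly_cost = monthly_tokens / 1_000_000 * COST_PER_1M_TOKENS_USD

print(f"tokens/month: {monthly_tokens:,}")         # 900,000
print(f"compute cost/month: ${monthly_cost:.2f}")  # $1.80
```

Under these assumed numbers, compute cost is well under the $20 subscription; the conclusion flips if the true cost per token or per-user usage is an order of magnitude higher, which only OpenAI knows.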



