
That's going to depend on how small the model can be made, and how much you are using it.

If we assume that running locally means running on a 500 W consumer GPU, then the electricity cost to run it non-stop 8 hours a day for 20 days a month (i.e. "business hours") would be around $10-20.
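The arithmetic behind that estimate can be sketched quickly. The $/kWh range below is an assumed typical residential electricity rate, not a figure from the thread:

```python
# Rough monthly electricity cost for a 500 W GPU run during
# "business hours" (8 h/day, 20 days/month).
gpu_watts = 500
hours_per_day = 8
days_per_month = 20

kwh_per_month = gpu_watts / 1000 * hours_per_day * days_per_month
print(f"{kwh_per_month:.0f} kWh/month")  # 80 kWh/month

# Assumed residential rates in $/kWh; actual rates vary widely by region.
for rate in (0.13, 0.25):
    print(f"${kwh_per_month * rate:.2f}/mo at ${rate}/kWh")
```

At 80 kWh/month, rates between roughly $0.13 and $0.25 per kWh land in the $10-20 range quoted above.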

This is about the same as OpenAI's or Anthropic's $20/mo plans, but for all-day coding you would want their $100 or $200/mo plans, and even those will throttle you and/or require you to switch to metered pricing when you hit plan limits.



Neither the $20 nor the $200 plans cover any API costs.

At $0.17 per million tokens, the smallest GPT model is still faster and more powerful than anything you can run locally, and cheaper per token than the kilowatt-hours it would cost to run locally even if you could.
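To put the quoted API rate in perspective, here is a rough sketch of how many tokens the $10-20 monthly electricity budget from above would buy (the $0.17/M figure is the one quoted in the comment; output tokens typically cost more):

```python
# Tokens purchasable at the quoted rate of $0.17 per million tokens,
# for budgets matching the estimated monthly electricity cost.
price_per_million = 0.17  # $/1M tokens, as quoted in the thread

for budget_usd in (10, 20):
    tokens = budget_usd / price_per_million * 1_000_000
    print(f"${budget_usd} buys roughly {tokens / 1e6:.0f}M tokens")
```

At these prices, $10-20/month corresponds to tens of millions of tokens, which is why metered API access can undercut the cost of local electricity alone for light-to-moderate use.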



