Hacker News
Ask HN: How can startups running ML models reduce their compute costs?
1 point by rococode on Feb 22, 2020 | 2 comments
Machine learning models seem generally expensive to run in production, especially at scale (how many GPU machines do you need up and running to serve 100 concurrent users without major delays?). It seems particularly hard for B2C/freemium-style companies to offset those costs.

If, for example, you're a freemium web app where each user request takes 100ms of inference time, you may have trouble turning a profit. Even something as basic as a translation tool (à la Google Translate) seems tricky for a startup to run at a profit. Larger companies presumably have the flexibility to treat some of these things as loss leaders, whereas a startup with ML as its core product likely needs to be more conscious about cost.
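To make that concrete, here's a rough back-of-envelope, assuming ~$0.50/hr for a single-GPU cloud instance (the price and utilization figures are illustrative assumptions, not quotes):

    gpu_cost_per_hour = 0.50      # assumed price for a single-GPU cloud instance
    inference_seconds = 0.10      # 100 ms per request
    requests_per_hour = 3600 / inference_seconds             # 36,000 if the GPU never idles
    cost_per_request_ideal = gpu_cost_per_hour / requests_per_hour   # ~$0.000014
    utilization = 0.05            # bursty freemium traffic rarely keeps a GPU busy
    cost_per_request_real = cost_per_request_ideal / utilization     # ~$0.00028
    print(cost_per_request_ideal, cost_per_request_real)

The per-request cost looks tiny at full utilization, but free-tier traffic is bursty, revenue per request is zero, and keeping enough instances warm for low latency multiplies the bill.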

What are some ways that you've seen companies cut down on compute costs for machine learning services?




Batch as much as possible. For companies that absolutely must have actual ML, the temptation to make everything real-time/on-demand is high, but it's sometimes (often) unnecessary. Take a serious, hard look at where that line is. It's like companies that only need simple rules or analytics reaching for ML: an unnecessary expense in terms of both development time and hard cash.
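Even for the traffic that does have to be served online, request-level micro-batching helps. A minimal sketch of the idea, assuming an asyncio-based Python service; run_model_batch is a placeholder for your framework's batched forward pass, and the batch size / wait time are illustrative:

    import asyncio
    import time

    # Placeholder for a real batched forward pass (PyTorch/TF/ONNX/etc.).
    # The point: one call over N inputs costs far less than N separate calls.
    def run_model_batch(inputs):
        time.sleep(0.1)  # simulate one fixed-cost GPU pass for the whole batch
        return [f"result:{x}" for x in inputs]

    class MicroBatcher:
        def __init__(self, max_batch=32, max_wait=0.05):
            self.max_batch = max_batch  # cap on requests per model call
            self.max_wait = max_wait    # max seconds a request waits to be batched
            self.queue = asyncio.Queue()

        async def infer(self, item):
            # Each caller parks a future on the queue and awaits its result.
            fut = asyncio.get_running_loop().create_future()
            await self.queue.put((item, fut))
            return await fut

        async def worker(self):
            while True:
                batch = [await self.queue.get()]  # block until at least one request
                deadline = asyncio.get_running_loop().time() + self.max_wait
                while len(batch) < self.max_batch:
                    remaining = deadline - asyncio.get_running_loop().time()
                    if remaining <= 0:
                        break
                    try:
                        batch.append(await asyncio.wait_for(self.queue.get(), remaining))
                    except asyncio.TimeoutError:
                        break
                # One model call serves every request collected in this window.
                results = await asyncio.to_thread(run_model_batch, [x for x, _ in batch])
                for (_, fut), res in zip(batch, results):
                    fut.set_result(res)

    async def main():
        batcher = MicroBatcher()
        worker = asyncio.create_task(batcher.worker())
        # Ten concurrent "users" end up sharing one or two model calls.
        print(await asyncio.gather(*(batcher.infer(i) for i in range(10))))
        worker.cancel()

    asyncio.run(main())

Serving stacks like NVIDIA Triton and TorchServe have dynamic batching built in, so in practice you'd more likely flip that switch than hand-roll a queue.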


If you can batch things up and don't need real-time results, using spot instances is a great way to lower costs.
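For the spot route, a rough sketch using boto3, assuming AWS; the AMI ID, instance type, and user-data script below are all placeholders, not real resources:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a one-off GPU box at spot pricing for an offline/batch inference job.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # placeholder: AMI with the model baked in
        InstanceType="g4dn.xlarge",          # GPU instance; spot price is a fraction of on-demand
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {
                "SpotInstanceType": "one-time",               # no persistent request
                "InstanceInterruptionBehavior": "terminate",  # job must tolerate being killed
            },
        },
        # Placeholder job script: pull work from a queue, run inference,
        # upload results to S3, then shut the instance down.
        UserData=open("run_batch_inference.sh").read(),
    )
    print(resp["Instances"][0]["InstanceId"])

The design constraint is that the job has to be resumable: AWS can reclaim spot capacity with only a two-minute warning, so keep work in a queue and write results incrementally.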



