Hacker News

> If you're working on ML (as opposed to deploying someone else's ML) then almost all of your workload is training, not inference.

Wouldn't that depend on the size of your customer base? Or at least, requests per second?




With more customers, revenue and profit usually grow; the team then gets larger, wants to run more experiments, spends more on training, and so on. Inference is just so computationally cheap compared to training.

That's what I've seen in my experience, though I grant there may be cases where the ML is a more-or-less solved problem serving a very large customer base, and inference dominates. I've rarely seen it happen, but other people here are describing scenarios where it happens frequently. So I guess it depends massively on the domain.
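The disagreement above can be made concrete with a back-of-envelope calculation. A common rule of thumb (an assumption here, not something stated in the thread) puts training compute at roughly 6·N·D FLOPs for a model with N parameters trained on D tokens, and inference at roughly 2·N FLOPs per token served. Under those approximations, cumulative inference compute catches up to training compute only after serving about 3·D tokens:

```python
# Back-of-envelope sketch, assuming the common FLOP approximations:
#   training  ~ 6 * N * D FLOPs  (N = params, D = training tokens)
#   inference ~ 2 * N FLOPs per served token
# These constants are illustrative assumptions, not from the thread.

def breakeven_inference_tokens(training_tokens: float) -> float:
    """Served tokens at which cumulative inference FLOPs equal
    training FLOPs: 6*N*D == 2*N*T  =>  T == 3*D (N cancels)."""
    return 3 * training_tokens

# Hypothetical example: a model trained on 1e12 tokens.
D = 1e12
T = breakeven_inference_tokens(D)   # 3e12 served tokens to break even
days_small = T / 1e9    # at 1e9 tokens/day served: ~3000 days
days_large = T / 1e11   # at 1e11 tokens/day served: ~30 days
print(T, days_small, days_large)
```

So at modest traffic, training dominates for years (supporting the first commenter), while at very high request rates inference can overtake training within weeks (supporting the "depends on the customer base" point).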



