Hacker News new | comments | show | ask | jobs | submit login

Missing one important point: ML workloads are extremely bursty. Some days we want to train 10 models in parallel, each on a server with 8 or 16 GPUs. Most days we're building datasets or evaluating results, and need zero.

When it comes to inference, sometimes you want to ramp up thousands of boxes for a backfill; other times you need only a few to keep up with streaming load.

Trying to do either of these on in-house hardware would mean either buying far more hardware than we usually need, which would sit idle most of the time, or seriously hampering our workflow and productivity.




On the other hand, this comparison accounts for the full cost of the rig, while a realistic comparison should consider the marginal cost. Most of us need a PC anyway, and if you're a gamer the marginal cost is pretty close to zero.
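A rough back-of-the-envelope sketch of that point. All dollar figures below are made-up assumptions for illustration, not numbers from the thread; the only claim is the shape of the comparison:

```python
# Full-cost view vs. marginal-cost view of a home deep-learning rig.
# Every number here is an assumed placeholder, not real pricing.
full_rig_cost = 2500        # assumed: complete build including a high-end GPU
baseline_pc_cost = 1800     # assumed: the PC you'd buy anyway (e.g. for gaming)
marginal_cost = full_rig_cost - baseline_pc_cost   # the extra spend for ML

cloud_rate = 0.50           # assumed: $/hour for a comparable cloud GPU

# Hours of cloud GPU time the rig must replace before it pays for itself
breakeven_full = full_rig_cost / cloud_rate
breakeven_marginal = marginal_cost / cloud_rate

print(f"break-even at full cost:     {breakeven_full:.0f} GPU-hours")
print(f"break-even at marginal cost: {breakeven_marginal:.0f} GPU-hours")
```

With these placeholder numbers the marginal-cost break-even is several times lower than the full-cost one, which is why the two framings lead to opposite conclusions.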



