
Not really: Figure 1(a) of the paper says the 17.7% is relative to a total of 30k GPUs (i.e. 5310 GPUs handling those 1.35% of requests), and the reduction is measured in a smaller beta deployment with only 47 different models (vs. the 733 "cold" models overall). Naïve extrapolation by model count suggests they would need 3321 GPUs to serve all cold models, a 37.5% reduction relative to before (or a 6.6% reduction of the full 30k-GPU cluster).
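
To make the extrapolation concrete, here is a minimal Python sketch of that arithmetic. All inputs are the figures quoted above; the linear scaling by model count is the assumption:

    # Back-of-the-envelope extrapolation, assuming GPU demand for cold models
    # scales linearly with the number of models served.
    total_gpus = 30_000                      # full cluster size
    cold_gpus_before = total_gpus * 0.177    # 17.7% share -> ~5,310 GPUs

    beta_models, beta_gpus_after = 47, 213   # smaller beta deployment
    all_cold_models = 733                    # cold models in the full cluster

    # Naive linear extrapolation by model count -> ~3,321 GPUs.
    cold_gpus_after = beta_gpus_after * all_cold_models / beta_models

    # ~37.5% less than the current cold pool (37.44% exactly)
    reduction_vs_cold_pool = 1 - cold_gpus_after / cold_gpus_before
    # ~6.6% of the whole 30k-GPU cluster
    reduction_vs_cluster = (cold_gpus_before - cold_gpus_after) / total_gpus

    print(f"{cold_gpus_after:.0f} GPUs needed, "
          f"{reduction_vs_cold_pool:.1%} less than before, "
          f"{reduction_vs_cluster:.1%} of the whole cluster")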




Really:

"A paper presented at SOSP 2025 details how token-level scheduling helped one GPU serve multiple LLMs, reducing demand from 1,192 to 213 H20s."

Which, if you scale it, matches the GP's statement.


From the SCMP article you might get the impression that the various figures all refer to the same GPU cluster, but the paper itself makes clear that this is not the case: the 213 GPUs in the smaller beta deployment are not serving 1.35% of the requests of the larger cluster. And if you do want to scale it up, there are several different numbers you could scale by, each giving a different result. Since they're constrained by the limited number of different models a single GPU can serve, I think scaling by the number of models is the most realistic option.
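
As a rough illustration of how much the choice of scaling factor matters, here is a small Python sketch using only the numbers quoted in this thread; both extrapolations are naive assumptions, not figures from the paper:

    # Two naive ways to extrapolate the beta result (1,192 -> 213 H20s, 47 models)
    # to the full cold pool (~5,310 GPUs, 733 models). They disagree widely.
    cold_gpus_before = 30_000 * 0.177            # ~5,310 GPUs

    # (a) Assume the beta reduction ratio (213/1,192) carries over unchanged.
    by_ratio = cold_gpus_before * 213 / 1_192    # ~949 GPUs

    # (b) Assume GPU demand scales with model count, since each GPU can only
    #     multiplex a limited number of different models.
    by_model_count = 213 * 733 / 47              # ~3,322 GPUs

    print(f"scale the ratio:      {by_ratio:.0f} GPUs")
    print(f"scale by model count: {by_model_count:.0f} GPUs")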


