Hacker News

Given that OpenAI has roughly fixed resources over any short window and varying demand (users and query types), it's not crazy to think they dynamically adjust to consume all available capacity on their side.

Obviously the base model would be the same, but aren't there +/- flavors they could overlay with extra compute? E.g. multi-pass sampling, additional experts, etc.
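One way that overlay could work is best-of-n sampling scaled to spare capacity: when the cluster is idle, draw more candidate completions and rerank. A minimal sketch of that idea, where the tier function, thresholds, and the stand-in model/reranker calls are all invented for illustration:

```python
def pick_num_samples(utilization: float, max_samples: int = 8) -> int:
    """Spend spare capacity on best-of-n: the lower the current
    cluster utilization (0.0-1.0), the more candidates we draw."""
    spare = max(0.0, 1.0 - utilization)
    return max(1, round(1 + spare * (max_samples - 1)))

def answer(prompt: str, utilization: float) -> str:
    # Stand-ins for a real model call and a real reranker.
    n = pick_num_samples(utilization)
    candidates = [f"{prompt} [sample {i}]" for i in range(n)]
    return max(candidates, key=len)  # pretend "best" = longest
```

Under full load this degrades gracefully to a single sample; at low load it quietly spends the idle GPUs on extra passes.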

The benefits of giving someone an occasional "magic" answer are too great not to.

Have there been any wide studies on same-prompt-different-times?
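Such a study would be easy to run: log the same prompt at different times of day and compare responses for drift. A minimal harness, where `query_model` is a placeholder stub standing in for a real API call:

```python
import hashlib
import time

def query_model(prompt: str) -> str:
    # Placeholder: swap in a real API call here.
    return f"response to {prompt!r}"

def sample_over_time(prompt: str, runs: int = 3, interval_s: float = 0.0):
    """Collect (timestamp, response-digest) pairs so runs taken at
    different times can be compared for variation."""
    samples = []
    for _ in range(runs):
        resp = query_model(prompt)
        digest = hashlib.sha256(resp.encode()).hexdigest()[:12]
        samples.append((time.time(), digest))
        time.sleep(interval_s)
    return samples
```

Hashing the responses makes it cheap to spot whether identical prompts produce identical text across hours or days, before doing any finer-grained comparison.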


