Hacker News

> There will come a time in which hardware is sufficiently powerful that one no longer needs these tricks to create top graphics.

I deeply wish this were true. But it doesn't seem likely to me.

Games are real-time. And frame rate expectations are getting higher (60, 90, and even 120Hz), which means frame-time budgets are getting shorter (roughly 16ms, 11ms, 8ms). Rasterization is likely to always be faster than raytracing, so the tradeoff will always be there.
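(For reference, those budgets are just 1000 / refresh rate in milliseconds; a quick sketch to show where the rounded numbers come from:)

```python
# Per-frame time budget at common refresh rates.
# budget_ms = 1000 ms / refresh rate in Hz
for hz in (60, 90, 120):
    budget_ms = 1000 / hz
    print(f"{hz} Hz -> {budget_ms:.1f} ms per frame")
# 60 Hz -> 16.7 ms, 90 Hz -> 11.1 ms, 120 Hz -> 8.3 ms
```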

Pixar movies look better and better every year. The more compute they can throw at the problem, the better the pixels they can render. Their offline rendering only gets more complex and higher quality. It's so complex and expensive that I'm genuinely afraid small animation studios simply won't be able to compete soon.

Maybe raytracing will be "fast enough" that most gamedevs can use the default Unity/Unreal raytracer and call it a day. But imho the author is spot on that AAA will continue to deeply invest in highly complex, bespoke solutions to squeeze every last drop and provide competitive advantage.




FWIW I think it's possible. The question is whether it's within our lifetime.

There's an upper limit to what's useful: the real world. If our minds can't comprehend the extra detail, we've hit the upper limit of practical use. Once in-game graphics become literally indistinguishable from real life, we'll start to plateau in the complexity of "tricks", and raw computing power will be able to catch up.


> Once in game graphics become literally indistinguishable from real life then we'll start to plateau in terms of complexity of "tricks" and raw computing power will be able to catch up.

Maybe. Alternatively, games won't actually want to render photorealistically and will instead want to render with varying styles of stylized graphics. Is that easier or harder? Probably a little bit of both.

We actually are at a point where we can render photorealistic scenes in real time... for certain types of objects, primarily static environments. Photogrammetry is basically cheating, but it is highly effective! The Mandalorian was famously filmed on a virtual set, and it's cool as fuck.

Graphics is rapidly moving into physics. We might soon be able to render photorealistic scene descriptions. However, when it comes to simulating the virtual world, we still have a long, long way to go. By simulation I mean everything from the environment (ocean, snow, etc.) to characters. We most definitely cannot synthesize arbitrary virtual humans.

Will we someday see a movie that _looks_ like a live action movie but is completely virtual? Oof. Maybe? But even if we could, would we want to? I'm not sure.


I'm afraid we won't see this limit hit in our lifetimes, though. The gap is too many orders of magnitude wide.



