
The trend is clear: Rendering engines are becoming compilers.

At the dawn of real-time rendering, everything was both bespoke and fixed-function. You would call a different optimized codepath to draw a floor, a wall, or a ceiling. If the platform was amenable to it, you might use self-modifying code to optimize the time/space tradeoff. The datasets were small enough that a simple array was the default "right" choice until you ran out of memory.

Gradually the demands generalized and became more algorithmic than low-level in nature. DOOM's best-known optimization was its use of a BSP tree for visibility.
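
For anyone who hasn't seen it: the whole trick is a recursive front-to-back walk of the BSP tree from the camera position, so visibility falls out of traversal order. A minimal sketch in C++ (the Node layout and sector payload here are made up for illustration, not DOOM's actual structures):

    #include <cstdio>

    struct Vec2 { float x, y; };

    struct Node {
        // Splitting line: points where dot(normal, p) - d > 0 are in front.
        Vec2 normal; float d;
        Node* front; Node* back;   // children; null means leaf
        int sectorId;              // leaf payload (hypothetical)
    };

    float side(const Node& n, Vec2 p) {
        return n.normal.x * p.x + n.normal.y * p.y - n.d;
    }

    // Visit leaves nearest-first, so each screen column can be filled once
    // and never overdrawn -- visibility falls out of the traversal order.
    void drawFrontToBack(const Node* n, Vec2 eye) {
        if (!n) return;
        if (!n->front && !n->back) {
            std::printf("draw sector %d\n", n->sectorId);
            return;
        }
        if (side(*n, eye) >= 0) {  // eye is in front: front subtree is nearer
            drawFrontToBack(n->front, eye);
            drawFrontToBack(n->back, eye);
        } else {
            drawFrontToBack(n->back, eye);
            drawFrontToBack(n->front, eye);
        }
    }

    int main() {
        Node leafA{{}, 0, nullptr, nullptr, 1};
        Node leafB{{}, 0, nullptr, nullptr, 2};
        Node root{{1, 0}, 5, &leafA, &leafB, -1};
        drawFrontToBack(&root, {8, 0});  // eye at x=8: sector 1 drawn first
    }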

Now - while there's still great demand for smarter algorithms - most of the pipeline complexity comes from this kind of generalized dependency analysis: maximizing use of resources across a broad, complex, configurable pipeline without hand-tuning everything, which is a familiar compiler problem to have.
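
A concrete example of that compiler flavor is the "frame graph" style of pass scheduling: declare what each pass reads and writes, then order them like a dataflow compiler would. A toy sketch (pass and resource names are hypothetical, and real implementations also cull dead passes, alias transient memory, and insert barriers):

    #include <cstdio>
    #include <functional>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    struct Pass {
        std::string name;
        std::vector<std::string> reads, writes;
    };

    // Order passes so every resource is written before it is read --
    // the same topological sort a compiler runs on a dataflow graph.
    std::vector<const Pass*> schedule(const std::vector<Pass>& passes) {
        std::map<std::string, const Pass*> writer;
        for (const auto& p : passes)
            for (const auto& r : p.writes) writer[r] = &p;

        std::vector<const Pass*> order;
        std::set<const Pass*> done;
        std::function<void(const Pass*)> visit = [&](const Pass* p) {
            if (!done.insert(p).second) return;  // already scheduled
            for (const auto& r : p->reads)       // schedule producers first
                if (auto it = writer.find(r); it != writer.end())
                    visit(it->second);
            order.push_back(p);
        };
        for (const auto& p : passes) visit(&p);
        return order;
    }

    int main() {
        std::vector<Pass> frame = {
            {"post",     {"lit"},     {"backbuffer"}},
            {"lighting", {"gbuffer"}, {"lit"}},
            {"gbuffer",  {},          {"gbuffer"}},
        };
        // Prints gbuffer, lighting, post regardless of declaration order.
        for (const Pass* p : schedule(frame))
            std::printf("%s\n", p->name.c_str());
    }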

One of the best talks at Siggraph last year was on Bungie's particle system in Destiny 2 (slides are here: http://advances.realtimerendering.com/s2017/index.html). The talk was an elegant breakdown of how they (1) created a CPU interpreter for a domain specific language describing particle parameters, (2) ported it to the GPU (even running an interpreter on the GPU for parts of the production game!), and (3) converted it into a compiler for the obvious performance wins.
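
Step (1) amounts to a tiny stack machine evaluated per particle per frame. A minimal sketch with made-up opcodes, just to show the shape (the real Destiny 2 bytecode is described in the slides):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    enum class Op { PushConst, PushAge, Add, Mul, Sin };

    struct Instr { Op op; float imm; };

    // Evaluate one particle attribute (e.g. size over lifetime) from bytecode.
    float eval(const std::vector<Instr>& code, float age) {
        float stack[16]; int sp = 0;
        for (const Instr& i : code) {
            switch (i.op) {
                case Op::PushConst: stack[sp++] = i.imm; break;
                case Op::PushAge:   stack[sp++] = age;   break;
                case Op::Add: sp--; stack[sp-1] += stack[sp]; break;
                case Op::Mul: sp--; stack[sp-1] *= stack[sp]; break;
                case Op::Sin: stack[sp-1] = std::sin(stack[sp-1]); break;
            }
        }
        return stack[0];
    }

    int main() {
        // size = 0.5 + 0.1 * sin(age): authored once, run for every particle
        std::vector<Instr> code = {
            {Op::PushConst, 0.5f}, {Op::PushAge, 0}, {Op::Sin, 0},
            {Op::PushConst, 0.1f}, {Op::Mul, 0}, {Op::Add, 0},
        };
        std::printf("%f\n", eval(code, 1.0f));
    }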

Apparently running bytecode interpreters on the GPU is not uncommon: https://dolphin-emu.org/blog/2017/07/30/ubershaders/
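
For anyone curious what that looks like: instead of compiling a specialized shader per pipeline configuration, one big shader loops over the configuration and branches on it every pixel. Sketched here in C++ rather than GLSL, with a made-up stage encoding in the spirit of Dolphin's TEV interpreter (not its real bit layout):

    #include <cstdint>
    #include <cstdio>

    // Hypothetical encoding of one blend stage's configuration.
    enum class StageOp : uint32_t { Replace, Modulate, Add };

    struct StageConfig { StageOp op; float constant; };

    // The per-pixel loop of an "ubershader": interpret the configuration
    // instead of compiling a dedicated shader for it, trading ALU cost for
    // freedom from shader-compilation stutter.
    float shadePixel(const StageConfig* stages, int n, float texel) {
        float color = 0.0f;
        for (int i = 0; i < n; ++i) {
            switch (stages[i].op) {
                case StageOp::Replace:  color = texel; break;
                case StageOp::Modulate: color *= stages[i].constant; break;
                case StageOp::Add:      color += stages[i].constant; break;
            }
        }
        return color;
    }

    int main() {
        StageConfig cfg[] = { {StageOp::Replace, 0}, {StageOp::Modulate, 0.5f} };
        std::printf("%f\n", shadePixel(cfg, 2, 0.8f));
    }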

Procedural generation and compilation of shaders has been common for a while in some engines and toolchains, too, though it's often done offline, before shipping the game.
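
The common form is stitching a preamble of #defines onto a shared shader body and compiling each permutation, either in the build or lazily at load time. A toy sketch with hypothetical feature flags:

    #include <cstdio>
    #include <string>

    struct Features { bool skinning, normalMap, fog; };

    // Emit a preamble of #defines, then append the shared shader body;
    // each distinct Features value yields a distinct compiled permutation.
    std::string generateShaderSource(const Features& f, const std::string& body) {
        std::string src;
        if (f.skinning)  src += "#define SKINNING 1\n";
        if (f.normalMap) src += "#define NORMAL_MAP 1\n";
        if (f.fog)       src += "#define FOG 1\n";
        return src + body;
    }

    int main() {
        std::string body = "/* ...shared HLSL/GLSL body... */\n";
        std::puts(generateShaderSource({true, false, true}, body).c_str());
    }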