I'm not a graphics expert, but the long, roundabout way the article gets back to "we're effectively working with the 23-bit mantissa because we've clamped down our range" makes me wonder: why couldn't you just do fixed-point math with the full 32-bit range instead?
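
For concreteness, here's a minimal C sketch of the trade-off I mean: over a depth range of [0, 1), a 32-bit unsigned fixed-point value has a uniform step of 2^-32, while a float32 near 1.0 can only resolve steps of about 2^-24 (23 mantissa bits plus the implicit leading bit). The program below is just illustrative.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* 32-bit unsigned fixed point over [0, 1): uniform step everywhere. */
        double fixed_step = 1.0 / 4294967296.0;              /* 2^-32 */

        /* float32 step just below 1.0: gap to the next representable value. */
        float below_one = nextafterf(1.0f, 0.0f);            /* largest float < 1.0 */
        double float_step = 1.0 - (double)below_one;         /* 2^-24 */

        printf("fixed-point step:    %.3g\n", fixed_step);   /* ~2.33e-10 */
        printf("float step near 1.0: %.3g\n", float_step);   /* ~5.96e-08 */
        return 0;
    }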



Modern GPUs aren't optimized for it. You can indeed do this in software, but this is the path hardware acceleration took.


It’s actually quite common to use a 24-bit fixed-point format for the depth buffer, leaving 8 bits for the stencil buffer.
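
As a rough sketch (the actual bit layout is driver- and hardware-defined; this packing and the function name are just illustrative), a D24S8-style word could be assembled like this in C:

    #include <stdint.h>

    /* Illustrative packing: 24-bit fixed-point depth in the high bits,
       8-bit stencil in the low bits. Assumes depth01 is already in [0, 1]. */
    static uint32_t pack_d24s8(float depth01, uint8_t stencil) {
        uint32_t d24 = (uint32_t)(depth01 * 16777215.0f + 0.5f); /* quantize to 2^24 - 1 */
        return (d24 << 8) | stencil;
    }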

GPUs do a lot of fixed-point-to-float conversions for the different texture and vertex formats, since memory bandwidth is more expensive than compute.
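
For example, sampling an 8-bit unorm texture involves a conversion like the sketch below (the divide-by-255 matches the usual unorm definition; the helper name is made up), which the hardware does on the fly so the data can stay compact in memory:

    #include <stdint.h>

    /* Expand an 8-bit unsigned-normalized (unorm) value to a float in [0, 1].
       GPUs perform this conversion in fixed-function hardware when fetching
       texels or vertex attributes, trading a little compute for bandwidth. */
    static inline float unorm8_to_float(uint8_t x) {
        return (float)x / 255.0f;
    }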


Plenty of GPUs do have special (lossless) depth compression; that's what the 24-bit depth targets are about.



