Perspective-Correct Interpolation (andrewkchan.dev)
54 points by nicopappl 10 months ago | 12 comments



This is a fascinating topic, especially when you consider how it was achieved back in the day. We take it for granted now, but this was not a simple feature to have in your game in 199x (x < 6 ;) ).

For anyone interested: I have a detailed write-up on the topic in the context of 1990s renderers https://github.com/sylefeb/tinygpus/tree/main?tab=readme-ov-... ; a video discussing texture mapping with a hardware twist https://youtu.be/2ZAIIDXoBis?si=MvQXH2ltqWmvFMdt&t=1072 ; and a shadertoy comparing perspective-correct texture mapping on/off https://www.shadertoy.com/view/ftKSzR


This reminded me of Quake's trick[1] of calculating the perspective correction only every 16 pixels, which made it effectively free on a Pentium processor.

[1]: https://www.bluesnews.com/abrash/chap68.shtml


I remembered the same. Perspective correction needs a division per pixel, and while drawing a scanline, computing it only every 16 pixels and linearly interpolating between those samples makes a huge speed difference.
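
A minimal sketch of the idea (illustrative names, not Quake's actual code), assuming u/z, v/z, and 1/z and their per-pixel screen-space steps have already been set up — those quantities are affine in screen space, which is why they can be stepped without dividing:

    /* Sketch of per-span perspective correction, not Quake's code.
       uoz, voz, ooz are u/z, v/z, 1/z at the span start; duoz, dvoz,
       dooz are their per-pixel steps. Only one divide per 16-pixel
       run is needed, and on a Pentium the FDIV for a run can overlap
       the integer work that draws pixels. */
    void draw_span(unsigned char *dst, const unsigned char *tex, int tex_w,
                   int count, float uoz, float voz, float ooz,
                   float duoz, float dvoz, float dooz)
    {
        float z = 1.0f / ooz;              /* divide at the left edge */
        float u = uoz * z, v = voz * z;

        while (count > 0) {
            int run = count < 16 ? count : 16;

            /* the one divide per run: exact u,v at the run's end */
            float ooz_end = ooz + dooz * run;
            float z_end   = 1.0f / ooz_end;
            float u_end = (uoz + duoz * run) * z_end;
            float v_end = (voz + dvoz * run) * z_end;

            /* linear interpolation between the two exact samples
               (in period code, /16 would be a fixed-point shift) */
            float du = (u_end - u) / run, dv = (v_end - v) / run;
            for (int i = 0; i < run; ++i) {
                *dst++ = tex[(int)v * tex_w + (int)u];
                u += du; v += dv;
            }

            u = u_end; v = v_end;
            uoz += duoz * run; voz += dvoz * run; ooz = ooz_end;
            count -= run;
        }
    }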


I remember that Descent, released earlier, used the same trick.


It turns out we all could count cycles.

Seriously, this keeps getting overhyped as some gigantic insight when it was really just a consequence of the Pentium having been released in 1993. With the Pentium you got reliable FPU availability (none of the 486SX pain), and the cycle count for FDIV dropped by almost half (73 -> 39, IIRC).

Everybody doing 3D gfx knew you needed a perspective divide and was looking at ways to do it cheaply. Interpolation plus a long-latency instruction that doesn't block the main pipelines is a fairly straightforward answer.


I seem to recall reading about it in one of the demo-coding articles floating about at the time, similar to this one[1]. But Quake is the one that stuck with me for some reason, and it's easy to link to.

[1]: https://www.lysator.liu.se/~mikaelk/doc/perspectivetexture/


Abrash was a prominent and prolific writer at the time (in addition to being an excellent coder). That might have helped make that example stick for folks.

It's also worth noting that that article is dated one year after Quake, two years after Descent.

Also, lysator... there's a name I haven't heard in a while! Thanks for reminding me.


Descent used a different trick: drawing constant-z lines (one division per line), which meant drawing diagonal lines rather than horizontal scanlines.
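
For anyone puzzling over why that works: along a line of constant z the perspective divide is the same everywhere on the line, so u and v are exactly affine in screen space and plain linear stepping is correct. A minimal sketch under that assumption (illustrative names, not Descent's code):

    /* Sketch, not Descent's code. Along a constant-z line the
       perspective divide happens once (the shared 1/z used to
       project the endpoints); after that, linear stepping of u
       and v is exact, with no per-pixel divide. */
    void draw_constant_z_line(unsigned char *dst, int stride,
                              const unsigned char *tex, int tex_w,
                              int steps, int dx, int dy, /* diagonal step */
                              float u0, float v0, float u1, float v1)
    {
        float du = (u1 - u0) / steps;
        float dv = (v1 - v0) / steps;
        float u = u0, v = v0;
        for (int i = 0; i <= steps; ++i) {
            *dst = tex[(int)v * tex_w + (int)u];
            dst += dy * stride + dx;   /* walk along the diagonal */
            u += du; v += dv;
        }
    }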



I think computing 2D barycentric coordinates by inverting a 3x3 matrix whose last column is [1, 1, 1] is a bit wasteful.

The proper solution is this: https://gamedev.stackexchange.com/a/63203
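
For reference, the usual direct computation in 2D uses signed areas (edge functions) and a single divide; I haven't checked that this matches the linked answer line for line, but it's the same idea:

    /* Sketch: 2D barycentric coordinates via signed areas.
       One divide total, no 3x3 matrix inversion. */
    typedef struct { float x, y; } vec2;

    static float cross2(vec2 a, vec2 b, vec2 c)
    {
        /* twice the signed area of triangle abc */
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    void barycentric(vec2 p, vec2 a, vec2 b, vec2 c,
                     float *w0, float *w1, float *w2)
    {
        float inv_area = 1.0f / cross2(a, b, c);
        *w0 = cross2(b, c, p) * inv_area;  /* weight of vertex a */
        *w1 = cross2(c, a, p) * inv_area;  /* weight of vertex b */
        *w2 = cross2(a, b, p) * inv_area;  /* weight of vertex c */
    }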


[author] Thanks for reposting this! It was a small but cool realization that all perspective correction needs is dividing the screen-space barycentrics by the vertex depths and renormalizing. I wanted to share my understanding of the math.

3D graphics is a rich and old field with lots of tricks like this. It's cool to see faster algorithms and alternative explanations in the comments!
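
For concreteness, the whole correction fits in a few lines; a sketch assuming screen-space barycentrics b0..b2 and vertex depths z0..z2 (names mine, not the post's):

    /* Divide the screen-space barycentrics by the vertex depths,
       renormalize so they sum to 1, then interpolate any attribute
       (here a texture coordinate u). */
    float perspective_correct_u(float b0, float b1, float b2,
                                float z0, float z1, float z2,
                                float u0, float u1, float u2)
    {
        float p0 = b0 / z0, p1 = b1 / z1, p2 = b2 / z2;
        float inv = 1.0f / (p0 + p1 + p2);
        return (p0 * u0 + p1 * u1 + p2 * u2) * inv;
    }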


The funny thing about this was that by the time you'd finished a 10-hour session of Tomb Raider you'd gotten used to the effect, so reality looked weirdly distorted for a while afterwards.



