
A look at the PowerVR graphics architecture: Deferred rendering - luu
http://blog.imgtec.com/powervr/the-dr-in-tbdr-deferred-rendering-in-rogue
======
klodolph
The article linked in the first paragraph provides some good context:
[http://blog.imgtec.com/powervr/a-look-at-the-powervr-
graphic...](http://blog.imgtec.com/powervr/a-look-at-the-powervr-graphics-
architecture-tile-based-rendering)

Graphics developers need to know this stuff because certain operations can be
very, very expensive on tile-based deferred renderers. Things like reading
from the framebuffer, for example. On an immediate mode renderer, the GPU just
has to queue up a DMA. On a tile-based deferred renderer, the previous
commands have to be flushed first. And these differences are part of the
motivation for new APIs like Metal and Vulkan.
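To make the cost difference concrete, here's a toy Python sketch of the idea (all class and method names are illustrative, not a real graphics API): on a tile-based deferred renderer, draw calls are only binned, so a framebuffer readback has to flush the whole bin list first, while an immediate mode renderer has nothing pending to wait on.

```python
# Toy model: why a framebuffer read is costly on a tile-based deferred
# renderer. Draws are only recorded ("binned"); a readback forces all
# pending work to be flushed first. Names here are illustrative.

class TileDeferredRenderer:
    def __init__(self):
        self.binned = []        # commands recorded, not yet executed
        self.framebuffer = {}

    def draw(self, pixel, color):
        self.binned.append((pixel, color))  # cheap: just binning

    def flush(self):
        # The expensive part: actually render every binned command.
        for pixel, color in self.binned:
            self.framebuffer[pixel] = color
        self.binned.clear()

    def read_pixels(self):
        self.flush()            # readback stalls until all tiles render
        return dict(self.framebuffer)

tbdr = TileDeferredRenderer()
tbdr.draw((0, 0), "red")
tbdr.draw((1, 0), "blue")
assert tbdr.binned              # nothing has been rendered yet
pixels = tbdr.read_pixels()     # this forces the flush
assert not tbdr.binned          # bin list drained by the readback
```

An immediate mode renderer's `read_pixels` would just copy (or DMA) the framebuffer, with no flush step in the way.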

~~~
bronxbomber92
Reading from the frame buffer is, in general, extremely cheap on TBDR
architectures, _not_ expensive. The frame buffer for the current render is
stored in tile memory which is fast to access. Accessing the frame buffer with
an immediate mode renderer will involve a load from main/global memory.

~~~
klodolph
I think I was a bit ambiguous. I was imagining something like glReadPixels()
with a PBO target, which won't work unless all of the tiles have been
rendered, but is just a DMA or pack operation in an immediate mode renderer.

------
amagumori
Edited out a minor freakout I had thinking PowerVR was claiming to have
invented TBDR when it's been in the literature for years. Well, guess what:
I'm totally wrong, PowerVR did invent tiled rendering. In 1996. That's what I
get for being 2 when the paper came out.

------
Coincoin
An unfortunate consequence of that is you can't discard or alpha-test pixels
without a major performance loss. This means if you want detailed stuff you
have to render everything as transparent, and then it's CPU z-sorting time.
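The CPU z-sorting step is simple but it's real per-frame work: transparent draws have to be submitted back-to-front (largest view-space depth first) so alpha blending composites correctly. A minimal sketch, with made-up object data:

```python
# Sketch of CPU-side z-sorting for transparent geometry: order draw calls
# back-to-front so alpha blending gives the right result. The object list
# and its fields are illustrative.

transparent_objects = [
    {"name": "window", "depth": 3.0},   # closest to the camera
    {"name": "smoke",  "depth": 9.5},   # farthest away
    {"name": "glass",  "depth": 6.2},
]

# Sort far-to-near before submitting draw calls.
draw_order = sorted(transparent_objects,
                    key=lambda obj: obj["depth"],
                    reverse=True)

assert [obj["name"] for obj in draw_order] == ["smoke", "glass", "window"]
```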

------
malkia
I remember back in the day a very popular Dreamcast demo that showed a deck of
very big cards jumping around on screen without slowdown, mainly because of
the tile-based rendering hardware the Dreamcast had (PowerVR).

The only downside was transparency (and I guess it still is), so people used
to do checkerboard transparency (e.g. imagine a chess/checkerboard: the white
pixels stay, the black ones are dropped). Then it was fun to combine this with
the other transparency techniques out there. Not sure how well it worked, but
people tried it.
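The checkerboard trick (often called "screen-door" transparency) can be sketched in a few lines: instead of blending, each pixel is either kept or discarded based on its position, so roughly half the pixels show the surface and half show what's behind it. This is illustrative only, not the Dreamcast's actual pipeline:

```python
# Sketch of checkerboard ("screen-door") transparency: keep or drop pixels
# by position instead of blending colors.

def screen_door_keep(x, y):
    # Keep pixels on the "white" squares of a 1x1 checkerboard pattern.
    return (x + y) % 2 == 0

# Render a 4x4 region: '#' where the surface is drawn, '.' where it is
# punched through to whatever is behind it.
rows = []
for y in range(4):
    rows.append("".join("#" if screen_door_keep(x, y) else "."
                        for x in range(4)))

assert rows == ["#.#.", ".#.#", "#.#.", ".#.#"]
```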

------
meanduck
Is this a real hardware architecture, or a software API on top of a general
SIMD architecture like GCN?

------
virtualritz
A lot of this just sounds like REYES implemented in hardware. Essentially what
Pixar did with the Pixar Image Computer in the early/mid '80s.

[https://en.wikipedia.org/wiki/Reyes_rendering](https://en.wikipedia.org/wiki/Reyes_rendering)

~~~
berkut
The tiling concept is similar, but the overall algorithm is a bit different,
in that with REYES micropolygons are shaded first before being visibility
tested: that's why REYES is so amazing at doing displacement.

It does mean, however, that REYES can suffer from huge overdraw when objects
are small on screen (unless the shading rate is set appropriately).

You can still cull objects that aren't on screen ahead of time, though, so you
make some savings there.
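A back-of-the-envelope way to see the overshade problem: because REYES dices surfaces into micropolygons and shades them before hidden-surface testing, a small on-screen object can shade many more points than the pixels it actually covers. The numbers below are made up for illustration:

```python
# Rough overshade estimate for REYES on a small/distant object.
# Micropolygons are shaded *before* visibility testing, so shading work
# can far exceed the visible pixel count. Numbers are illustrative.

pixels_covered = 100          # the object only touches ~100 pixels on screen
micropolygons_shaded = 4000   # dicing produced 4000 shaded micropolygons

overshade_factor = micropolygons_shaded / pixels_covered
assert overshade_factor == 40.0   # 40x more shading work than visible pixels
```

Setting the shading rate appropriately (coarser dicing for small objects) is what pulls that factor back down.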

Pretty much all rasterisers can suffer from this issue to some degree, which
is why using raytracing for initial visibility testing scales far better
(assuming acceleration structures are used) with bigger scenes (hundreds of
millions of triangles). And it's one of the reasons why almost all offline
renderers used for VFX have moved to raytracing.

