
PowerVR Wizard GPU does real-time ray tracing at 10x lower power than Nvidia GTX - alexvoica
http://blog.imgtec.com/powervr-developers/real-time-ray-tracing-on-powervr-gr6500-ces-2016
======
berkut
_sigh_

Why does pretty much all the marketing and examples of "groundbreaking"
new/faster tech for raytracing/pathtracing always just show outside IBL-lit
scenes?

These are _easy_ to resolve (generally within 32 progressions with MIS for
diffuse surfaces).

Similarly, they rarely show anything other than perfectly specular glass or
fairly rough microfacet surfaces, which, with indirect caustics turned off (as
they obviously are in the above examples), are again pretty trivial to resolve.

Indoor scenes with many lights (some of them occluded) and lots of indirect
illumination are going to be a _lot_ slower, and for games I would have
thought this is important: I don't see how this latter scenario is workable in
real time without lots of cheats.
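
(To make the MIS point concrete, here's a minimal sketch of the balance
heuristic used when combining light sampling and BSDF sampling - generic
textbook code, nothing specific to PowerVR, and the pdf values are made up
for illustration.)

```python
def balance_heuristic(pdf_a: float, pdf_b: float) -> float:
    """MIS balance heuristic: weight for a sample drawn from strategy A,
    given that strategy B could have generated the same direction."""
    return pdf_a / (pdf_a + pdf_b)

# A direct-lighting estimator combines a light sample and a BSDF sample,
# each weighted so the two strategies' weights sum to 1 per direction:
w_light = balance_heuristic(0.8, 0.2)  # light sampling dominates here
w_bsdf = balance_heuristic(0.2, 0.8)
assert abs(w_light + w_bsdf - 1.0) < 1e-9
```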

~~~
dogma1138
If you're going to cheat, just give up on RT - this is what precomputed
shadow maps, light maps, specular maps, etc. are for, together with various
cheats to clip shadows when they overlap or are already at max "darkness".
Same thing with lights: when you have multiple light sources, you usually
approximate by stacking the combined illumination values for light sources of
the same direction.
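
(My reading of that "stacking" approximation, as a hedged sketch - the
function name is mine, not from any real engine: each light's contribution is
simply summed and clamped, rather than resolved with per-light occlusion
rays.)

```python
def shade_diffuse(base_color, light_intensities):
    """Rasterizer-style cheat: sum the per-light illumination values and
    clip at max brightness, instead of tracing shadow rays per light."""
    total = min(sum(light_intensities), 1.0)  # stack, then clip at max
    return tuple(channel * total for channel in base_color)

# Two lights from roughly the same direction stack to full brightness:
print(shade_diffuse((1.0, 0.5, 0.0), [0.6, 0.7]))  # -> (1.0, 0.5, 0.0)
```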

I really don't understand why they are chasing RT.

It would also be nice to know how well they can actually run compute shaders,
do tessellation, and handle many other things, because this is one heck of a
fluff piece, and I can remember at least one announcement like this over the
past 20 years proclaiming that real-time RT in hardware is here.

~~~
namuol
> I really don't understand why they are chasing RT.

1\. The highly parallelizable nature of RT is ideal for the many-core
architectures we're heading toward. Note that the PowerVR hardware used in
the demo has 4 cores, needs no fans, and uses 10x less power than a
traditional rasterization-optimized GPU.

2\. The simplicity of implementing physically-based effects leads to a much
easier time for artists.

~~~
tizzdogg
I would argue that physically-based raytracers actually make things more
difficult for artists, because they tend to be less flexible. Being physically
correct is great if what you get out of the renderer is exactly what you want,
but as soon as you need to cheat anything to achieve a particular look, it's
much easier to not be bound by physical correctness.

~~~
berkut
Maybe - it depends on whether you're doing CG animation (Pixar, DreamWorks
style) or more realistic work (VFX, photoreal stuff).

If the former, then maybe a bit.

But the beauty of physically-based rendering is that in many cases you can
take a model that's been textured and lookdeved, and it will work nicely in
many different lighting setups. Before physically-based, you practically had
to redo things completely when you changed the lighting to get the look you
wanted.

However, physically-based still isn't a complete win - you can't lookdev
something close up and expect it to look good in the distance; it just won't.
You need to be aware of LODs and create different asset variations
appropriately for different distances.

~~~
TheOtherHobbes
It would be _really nice_ to be able to do things like this in RT:

[http://www.zeitguised.com/geistxyz#geistxyz01](http://www.zeitguised.com/geistxyz#geistxyz01)
[https://vimeo.com/150824660](https://vimeo.com/150824660)

------
_Codemonkeyism
How times change - roughly 20 years ago I did real-time ray tracing of some
spheres (it can't get simpler) on a DEC Alpha cluster, at low resolution, and
was amazed.

------
winterismute
If you want to know a bit more about the details of the platform, these 2
presentations from GDC 2014 are still valid material:

[http://www.gdcvault.com/play/1020741/New-Techniques-Made-Pos...](http://www.gdcvault.com/play/1020741/New-Techniques-Made-Possible-by)

[http://www.gdcvault.com/play/1020688/Practical-Techniques-fo...](http://www.gdcvault.com/play/1020688/Practical-Techniques-for-Ray-Tracing)

------
Moral_
ImgTec's Linux drivers are also pure garbage in the security sense. I would
never install one of their drivers; it would essentially be the same as
installing a backdoor.

~~~
FnuGk
nvidia or powerVR?

~~~
robbies
PowerVR is a subdivision of ImgTec, so... your answer is PowerVR.

------
transfire
Photorealism in video games is around the corner, brothers.

~~~
Millennium
Photorealism in video games has arrived. It's arrived at least 6-7 times,
depending on how you count console generations. Even 16-bit consoles were
called photorealistic in their day, and then their successors were, and then
THEIR successors were, and so on. It's almost as though we're asymptotically
approaching a carefully-chosen marketing-driven goal that can never actually
be reached, but towards which we can continually make (ever-tinier) steps.

~~~
ZenoArrow
What you say may be true, but there's a difference between 16-bit
photorealism and the photorealism possible today: it's now possible to find
rendered images that are hard to distinguish from real life. In the 16-bit
era? Not so much.

To give some examples, take a look at these images created in Blender:

[http://www.blenderguru.com/articles/24-photorealistic-blende...](http://www.blenderguru.com/articles/24-photorealistic-blender-renders/)

~~~
potatolicious
Most of those are pretty evidently distinguishable from real life...

The issue here - and the point being made - is that we are _already_ at the
stage where many CG images can fool even close observers, and we've been here
for some time.

We've been passing around these "CGI OR REAL?!!!!" images for a few years at
least, and when they are first created people really can't tell the
difference, yet somehow when people get used to the tech and revisit these
images a few years down the line, they look _obviously_ synthesized.

Ditto older movies like Jurassic Park, whose cutting-edge CGI was convincing
at the time even when viewed frame-by-frame, but appears downright obvious to
savvy eyes today.

So clearly "photoreal" is something we haven't reached yet. IMO the indicator
that we've reached "true photoreal" is when an image rendered 5 years ago
still holds up to fresh eyes.

~~~
ZenoArrow
One of those images was created in 2007, and still holds up as photorealistic
for me (though admittedly I'm not viewing it on a large monitor).

I'd be interested to hear your take on the 'giveaways' on each of those
images. I can see some on some of them, but not on all of them.

------
toast42
Assuming the claims are correct (5x performance, 10x less power), what's
stopping this from going mainstream?

~~~
jsheard
The results aren't good enough to compete in games. Gamers don't care if the
reflections and shadows are more authentic if the shading as a whole regresses
by several generations.

Just compare the subjective quality of their "accurate rendering" to what that
Nvidia GPU they showed can do with cheap approximations:

[https://www.youtube.com/watch?v=BAAPTiuFdwU](https://www.youtube.com/watch?v=BAAPTiuFdwU)
[https://www.youtube.com/watch?v=slc--V2pi5c](https://www.youtube.com/watch?v=slc--V2pi5c)

[https://www.youtube.com/watch?v=vfZD22zMnUY](https://www.youtube.com/watch?v=vfZD22zMnUY)
[https://www.youtube.com/watch?v=UwEuSxAEXPA](https://www.youtube.com/watch?v=UwEuSxAEXPA)

~~~
givinguflac
The results aren't good enough to compete in games that require an expensive
dedicated graphics card that costs more than the device this gpu will go in.
FTFY.

Compared to any current phone/tablet GPU, it's going to be a big step up.

~~~
jsheard
Going by their claim of "10x lower power than a 980 Ti", this PowerVR card is
burning through roughly 25 watts, so it being a visual step up from Snapdragon
and Apple SoCs, which draw 3-4 watts for the CPU and GPU _combined_, isn't
particularly impressive.
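
(Back-of-envelope for the numbers above, assuming the GTX 980 Ti's published
250 W TDP as the baseline - that TDP figure is mine, not from the article:)

```python
gtx_980ti_tdp_w = 250             # Nvidia's published TDP for the 980 Ti
powervr_w = gtx_980ti_tdp_w / 10  # the "10x lower power" claim -> 25 W
phone_soc_w = 3.5                 # typical CPU+GPU budget cited above
print(powervr_w, powervr_w / phone_soc_w)  # 25 W is ~7x a whole phone SoC
```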

~~~
ssmoot
From the article:

> The PowerVR GR6500 is a mobile GPU. Its die size, GFLOPS performance,
> bandwidth requirements and power consumption mean that it is comparable to
> the GPUs already available in smart phones today. But compared with a
> console GPU or looking towards the smart phones and handheld devices of the
> future, we see a roadmap that scales in capabilities and performance well
> beyond the GR6500’s specifications. The PowerVR Ray Tracing technology is
> fundamentally scalable and the efficiency actually increases as we move to
> more and more powerful cores.

~~~
dharma1
28nm is the process node of previous-gen mobile GPUs. But it would be great
to see how this performs for desktop use when they shift to smaller process
nodes and scale it to match the power consumption and cost of, say, a GTX
980.

