
AMD May Add Ray Tracing Feature in Navi 20 GPU - skolos
https://articles.marketrealist.com/2019/04/rumor-amd-may-add-ray-tracing-feature-in-navi-20-gpu/
======
tracker1
Of course, depending on how well the Navi core works, it could just be a
software hack, not dedicated cores. Given how poorly (relatively speaking) the
RTX does with ray tracing features enabled, I'm not sure it's really at much
of an advantage.

I do hope the AMD cards are competitive; they've been a lot better than NVidia about open drivers. Here's hoping they make an impact. I'm planning to upgrade to a Zen 2 processor when they come out, and if AMD is competitive at least on price, I may switch, or keep my 1080 and pass my old computer on with a different GPU.

~~~
dragontamer
Based on when and where AMD's money came from (sales of the Zen CPU), it is
unlikely that Navi got the infusion of money needed before its design was
complete. AMD was still losing lots of money back in 2016 when Navi's design
would have been largely completed.

AMD began to make money in 2017 and 2018, but an infusion of cash that late in
the game won't have major effects on the GPU's architecture. Navi engineering-
samples would have had to have first taped out in 2018.

The next GPU in 2020 will be the first one made where the design started with
AMD's renewed R&D effort (funded by the success of its Zen CPU).

My overall expectation for Navi is that it's a slightly improved Radeon VII architecturally (a few new assembly instructions here and there... but nothing majorly different). It would still be a GCN architecture. EDIT: And Navi is expected to be far smaller (and therefore slower) than the Radeon VII.

---------

Of course, AMD may surprise me. But that's what I expect purely based on the
financials of the company.

~~~
xfalcox
The rumors say that a lot of engineers were moved off the last two GPU architectures (4xx/5xx, Vega) so they could put most of the workforce behind Navi. I'm hopeful for a good card that I can run natively on Linux.

~~~
metildaa
It's interesting to watch AMD at work: for one GPU generation they spent $300 million for everything, including design, marketing, and the single production run of silicon.

Meanwhile, Intel is spending ~$4 billion on its iGPU architecture alone for the next generation, and has been spending heavily for years trying to improve, with much worse results.

------
jandrese
The article seems pretty speculative. AMD might introduce ray tracing features; they might be faster than nVidia's 20x series cards. Hold off on your purchase, cowboy! The included graphs of nVidia gobbling up market share over the past year aren't helping either.

~~~
beenBoutIT
Good advice. The idea that 'AMD might be really great soon' may never come true for consumer GPUs, although AMD does do a great job in consoles like the PS4.

------
jammygit
Back on the team liquid forums, we used to call this an announcement of a
possible future announcement.

Will read the article when it is an actual one.

------
shmerl
I think ray tracing will be handled well enough by generic GPU hardware simply using APIs like Vulkan. Nvidia probably oversold their dedicated components, and AMD will try to go the more straightforward way to compete. Which is good, if the same thing can be achieved with portable cross-GPU approaches. GPU lock-in is bad and only wastes developers' time.

~~~
dragontamer
A huge portion of ray tracing is already fulfilled by generic GPU hardware. It's the bits that NVidia RTX accelerates that make the difference.

The coarse bounding-box check, as well as the ray/triangle intersection check,
has DEDICATED hardware on NVidia RTX series (2060, 2070, 2080, and 2080 Ti).

This means that NVidia's RTX cards can determine if a ray is intersecting a
triangle, or a rectangle, with less power and at higher speeds than other
GPUs.

That's a very important operation in the overall Raytracing problem. Future
GPUs will need to implement this hardware-accelerated intersection check if
they wish to be as fast (or faster) than RTX cards.

Alternatively, maybe there's an algorithm to do it as fast in software, but
that seems unlikely to me.
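For reference, the operation being accelerated is small enough to sketch. Below is a minimal software version of the ray/triangle intersection check (the standard Möller-Trumbore algorithm); it's illustrative only, and the hardware units obviously don't run Python:

```python
# Minimal Möller-Trumbore ray/triangle intersection: the per-ray,
# per-triangle test that RTX fixed-function units accelerate.
# Pure-Python sketch for illustration only.

def ray_triangle_intersect(orig, direction, v0, v1, v2, eps=1e-9):
    """Return distance t along the ray to the hit point, or None on miss."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:
        return None                      # ray is parallel to the triangle
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None                      # outside the triangle (barycentric u)
    q = cross(s, edge1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None                      # outside the triangle (barycentric v)
    t = f * dot(edge2, q)
    return t if t > eps else None        # hit must be in front of the ray
```

A GPU runs this (or its SIMD equivalent) millions of times per frame, which is why moving it into dedicated silicon pays off.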

~~~
Jasper_
Ray-triangle intersection is a very small part of the overall problem. Surface shading is considerably expensive, and raytracing makes it worse, as you don't have the coherent material and texture access you had in rasterization. NVIDIA's model also makes it difficult to sort rays by material.

~~~
dragontamer
> Ray-triangle intersection is a very small part of the overall problem.

It's a small part, but a very memory-latency-bound part of the problem. GPUs are typically very bad at memory latency. Dedicated units would be able to traverse the BVH tree far more efficiently than a typical GPU shader design.

The BVH tree is your typical tree: pointer-chasing on a GPU is way slower than on a CPU, especially with something like a BVH, which is far larger than can fit in cache. A typical shader would just be stuck waiting for memory, while RTX cores can likely process a ton of these traversals in parallel.

Hard to say for sure: NVidia didn't say how its RTX Cores work at the assembly
level. But it isn't hard to imagine dedicated units that are more efficient
than a typical design.
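The pointer-chasing being described looks roughly like the toy sketch below: each node visit is a dependent memory load before the next child pointer is known. This is a generic stack-based BVH traversal, not NVidia's actual (undisclosed) design, and the node layout is made up for illustration:

```python
# Toy stack-based BVH traversal. Every `stack.pop()` + node access is a
# dependent memory load -- the latency-bound pointer-chasing a plain
# shader would stall on, and that dedicated RT units try to hide.

class BVHNode:
    def __init__(self, box, left=None, right=None, triangles=None):
        self.box = box                      # ((min_x, min_y, min_z), (max_x, max_y, max_z))
        self.left, self.right = left, right
        self.triangles = triangles or []    # only leaves hold geometry

def ray_hits_box(orig, inv_dir, box):
    # Standard slab test; inv_dir holds 1/direction per axis.
    tmin, tmax = -float("inf"), float("inf")
    for axis in range(3):
        t1 = (box[0][axis] - orig[axis]) * inv_dir[axis]
        t2 = (box[1][axis] - orig[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmax >= max(tmin, 0.0)

def traverse(root, orig, inv_dir):
    """Collect triangles from every leaf whose bounding box the ray hits."""
    hits, stack = [], [root]
    while stack:
        node = stack.pop()                  # dependent load: next pointer
        if node is None or not ray_hits_box(orig, inv_dir, node.box):
            continue                        # prune this whole subtree
        if node.triangles:                  # leaf: hand triangles to the
            hits.extend(node.triangles)     # intersection test
        else:
            stack.append(node.left)
            stack.append(node.right)
    return hits
```

The pruning step is where the win comes from: subtrees whose boxes the ray misses are never loaded at all.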

------
mhh__
To someone familiar with ray tracing algorithms: how much efficiency can be gained from different (yet-to-be-invented) algorithms and tricks (e.g. the kind of stuff that modern game engines use that didn't exist when CG was in its infancy), or is real-time raytracing purely limited by the hardware (by which I mean hardware in general rather than current hardware)?

~~~
elihu
I think the biggest potential area for algorithm-driven improvement isn't so
much coming up with clever ways to trace rays faster, but rather coming up
with ways to generate better quality output within a given rays-per-second
budget.

Machine-learning-based denoising algorithms are a good example of that.
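The budget argument can be illustrated with a toy experiment: Monte Carlo pixel noise falls only as 1/sqrt(N) with more rays, so filtering a cheap noisy render can rival simply casting more rays. The numbers below are made up, and a naive box filter stands in for a real ML denoiser:

```python
import random
import statistics

# Toy rays-per-second budget demo: compare a cheap noisy render (4 rays
# per pixel), a 4x-more-expensive render (16 rays), and the cheap render
# plus a naive box filter standing in for an ML denoiser. All parameters
# are illustrative; the "scene" is a flat gray image.

random.seed(0)
TRUE_VALUE = 0.5                         # ground-truth pixel value

def render(n_samples, width=256):
    """One noisy Monte Carlo sample-mean per pixel (1-D 'image')."""
    return [statistics.mean(random.gauss(TRUE_VALUE, 0.2)
                            for _ in range(n_samples))
            for _ in range(width)]

def box_filter(img, radius=2):
    """Naive spatial denoiser: average each pixel with its neighbors."""
    return [statistics.mean(img[max(0, i - radius):i + radius + 1])
            for i in range(len(img))]

def rmse(img):
    return statistics.mean((p - TRUE_VALUE) ** 2 for p in img) ** 0.5

noisy_4 = render(4)                      # cheap: 4 rays per pixel
noisy_16 = render(16)                    # 4x the ray budget
denoised_4 = box_filter(noisy_4)         # cheap render + filtering
```

With these numbers the filtered 4-ray image lands near the quality of the unfiltered 16-ray image at a quarter of the ray cost, which is the trade a learned denoiser exploits far more cleverly.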

------
ars
I'm waiting for GPUs that can output a holographic interference pattern. It looks like random noise, but makes a 3D image if you shine a laser through it.

I think ray tracing actually has an advantage here as a precursor to the information needed to calculate the interference pattern.

~~~
ajconway
Light-field displays may become a thing in a not-so-distant future.

[https://www.roadtovr.com/creal3d-light-field-display-ar-vr-c...](https://www.roadtovr.com/creal3d-light-field-display-ar-vr-ces-2019/)

------
lanevorockz
Please do. Ray tracing is not that hard to implement and has a pretty good impact for gamedevs. Unifying how light is handled can make lots of prebaked lighting data disappear, making iteration on models/levels more consistent.

------
sprash
Nobody cares about the RTX feature. Gamers always prefer higher framerates over improved indirect lighting, which in practice is rather subtle. Environment mapping has been king so far for a reason. Psychovisually accurate reflections are the least important thing to look for.

AMD should ignore the RTX move by Nvidia completely and focus on reintroducing their old VLIW architecture for shaders, which saved almost an order of magnitude in power consumption.

~~~
_bxg1
Raytracing isn't just some expensive gimmick to make lighting look better.
It's the entire future of real-time graphics. Right now it's used sparingly,
but in a few years it's going to replace the great majority of rasterization
use-cases altogether. This will drastically simplify graphics programming and
allow for things we never thought possible. I dearly hope AMD gets on board
because competition is good, but the train won't wait for them.

~~~
pizza234
> in a few years it's going to replace the great majority of rasterization
> use-cases altogether

There's no ground to be (so) sure of that.

Given the current high cost, and the more or less subtle effect, the prediction is uncertain, although very desirable.

If in "the future" there is a way to make RT cheap, then definitely, it will be _the_ way (I'd be very happy about that). If not, it will be just another option, as it doesn't make sense to shove a feature that eats 30-50% of the framerate down everybody's throat.

Additionally, as of now, we're talking about a 30-50% performance loss for a single RT effect - e.g. Metro Exodus uses it only for global illumination, while Battlefield uses it only for reflections. What's going to be the performance hit for games using multiple RT effects?

~~~
wtracy
Within our lifetimes, I expect we are going to see games that run _faster_ raytraced than rasterized.

Why? Raytracing doesn't pay a penalty for overdraw. Rasterization time
increases linearly with scene complexity, while raytracing time (with the
proper acceleration structures in place) increases logarithmically.
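The scaling claim can be sketched with a toy work counter: per ray, brute-force testing touches every one of N triangles, while a balanced BVH walks roughly one node per tree level on the way to a leaf. This is an idealized model (real traversals visit more than one path, and rasterizers also cull), purely to show the growth rates:

```python
import math

# Idealized per-ray work model: brute force tests all N triangles,
# while a balanced binary BVH over N triangles visits ~log2(N) + 1
# nodes down a single root-to-leaf path. Toy numbers, not a renderer.

def brute_force_tests(n_triangles):
    return n_triangles                   # linear in scene complexity

def bvh_visits(n_triangles):
    # one node per level of a balanced binary tree with N leaves
    return max(1, math.ceil(math.log2(n_triangles)) + 1)

for n in (1_000, 1_000_000):
    print(f"{n:>9} triangles: brute={brute_force_tests(n):>9}  bvh={bvh_visits(n)}")
```

Going from a thousand to a million triangles multiplies brute-force work by 1000x but the BVH path length by only about 2x, which is the asymmetry behind the "open worlds only possible via raytracing" prediction.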

We're eventually going to see open world games that are only possible via
raytracing.

~~~
Rusky
Rasterization already makes heavy use of similar acceleration structures to
avoid overdraw.

