
Ray Tracing Without Ray Tracing API - ingve
https://diaryofagraphicsprogrammer.blogspot.com/2018/09/ray-tracing-without-ray-tracing-api.html
======
Keyframe
_based on the last 20+ years of graphics development, it is easy to project
that when a large part of the ray tracing codebase is owned by a hardware
vendor, there will be various bugs introduced with each driver release._

also based on the last 20+ years of graphics development: in real-time
applications you've been using APIs with vendor-specific implementations for
quite some time now, and you've been dependent on driver quality for just as
long. So what's the author's point?

In offline rendering, people do write their own everything, with
vendor-specific code paths to accelerate certain stages where possible
(Embree, GPGPU...). You can bet people will implement support for RTX as
well; in fact, some already are. Implementing your own version on the
compute/shader path will NOT utilise the dedicated unit on RTX cards that is
meant for these new accelerations, the RT Core.

~~~
rossnordby
The likelihood of a feature being functional is usually based on two things:
its hidden complexity, and how frequently used it is. HW ray tracing suffers
on both of these counts (for now). It won't be truly common outside of major
engines for years, and it hides a lot.

Any developer who is in the awkward middle ground of building their own
technology without the resources of a major engine (both in terms of manpower
and in relevance to hardware vendors) should expect everything on the bleeding
edge to be partially or fully broken, often with no timely recourse. I'm not
faulting the vendors too much: the matrix of software and hardware
combinations is absurdly complicated, and spending limited driver resources
on a smaller company's use case instead of, say, Unreal and Unity doesn't
make a lot of sense.

But if you're one of those smaller companies, you pretty much have to build
the fallback implementation first using common low level primitives that have
been tested a million times across the industry. If you don't, your product
simply won't work for many (and sometimes most) users. Even when their
hardware is supposed to support it.

And speaking from unfortunate experience, the up-front cost of building the
fallbacks is often less than trying to make the fancier things work
consistently, even though the fancier thing is ostensibly doing the work for
you.

~~~
mattnewport
I'm not convinced the part of ray tracing implemented in hardware on the new
NVIDIA cards (primarily BVH traversal and ray-triangle intersection) is
significantly more complex than things that are already implemented in
hardware (rasterization, anisotropic filtering, tessellation, scheduling,
compression, etc.), and those tend to be pretty solid.
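
For a sense of scale: the ray-triangle kernel is small and well understood.
A Möller–Trumbore intersection test, for example, fits in a few dozen lines
(plain C++ sketch of the classic algorithm, not what the silicon literally
runs):

```cpp
#include <array>
#include <cmath>
#include <optional>

using Vec3 = std::array<float, 3>;

static Vec3 sub(Vec3 a, Vec3 b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
static float dot(Vec3 a, Vec3 b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }

// Möller–Trumbore: returns the ray parameter t of the hit, if any.
// This is the kind of fixed-function kernel an RT core evaluates.
std::optional<float> intersect(Vec3 orig, Vec3 dir,
                               Vec3 v0, Vec3 v1, Vec3 v2) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;  // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 tv = sub(orig, v0);
    float u = dot(tv, p) * inv;                     // first barycentric coord
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Vec3 q = cross(tv, e1);
    float v = dot(dir, q) * inv;                    // second barycentric coord
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * inv;                     // distance along the ray
    return t > eps ? std::optional<float>(t) : std::nullopt;
}
```

For instance, a ray from (0,0,-1) along +z hits the triangle spanning
(-1,-1,0), (1,-1,0), (0,1,0) at t = 1. The hard part the hardware adds is
doing billions of these per second behind a BVH, not the math itself.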

I also expect the hardware ray tracing to be heavily used for offline
rendering before it becomes mainstream for real time so there will be plenty
of usage from demanding customers to iron out any issues.

~~~
rossnordby
It's true that features of similar or greater complexity do tend to work
pretty well already, but those features tend to be the ones that are
practically fundamental and used everywhere. (edit: but I have hit bugs with
scheduling too, and maybe tessellation and anisotropic filtering, though those
cases were weird enough that it was hard to tease out the true cause.)

Ray tracing has a good shot at getting there (and probably in less than 5-10
years), and developers who can afford the suffering will hopefully pave the
way, but early adopters should be very wary.

To be clear, I'm not catastrophically pessimistic. I intend to do some work
with it, but I'm not under any illusions. I once encountered five distinct
blocking driver bugs in the span of a single week. I just assume everything is
broken until proven otherwise.

~~~
mattnewport
I think drivers tend to be more problematic than hardware; I'd actually expect
more issues with driver-implemented support for ray tracing APIs running as
compute shaders than with the hardware-accelerated support.

~~~
rossnordby
Yup, that's definitely true. It's going to be a fun transition period.

------
fwip
Using hardware-provided APIs might take more QA later, but I'm sure creating
your own ray tracing implementation from scratch costs way more developer and
QA time up front (assuming similar quality).

~~~
lostmsu
I had the same thought during the SIGGRAPH presentation. The API they are
providing is a step back: too high-level, like going from Vulkan back to
OpenGL.

It is nice to have for starters, but it will die quickly if somebody comes up
with an improved, incompatible technique that can be implemented on top of
the existing low-level primitives.

~~~
dogma1138
Who is "they"? NVIDIA provides multiple interfaces which are accessible
through various APIs or natively; the point of the hardware is to provide
specific interfaces and data types which make ray tracing faster, and these
are not limited to a specific graphics API.

You can use it through OptiX, which is a proprietary low-level C++ API based
on CUDA, or you can use it with Vulkan or DX12's DXR; the level of
abstraction depends on which API/interface you'll be using.

On top of that, NVIDIA provides ready-to-use hybrid ray tracing libraries
which are built on top of Microsoft's DXR, with Vulkan support coming within
the next 2-3 months.

But you don't have to use their hybrid path tracing or denoiser; you can
develop it completely on your own.

~~~
lostmsu
I was talking specifically about DXR.

~~~
dogma1138
DXR doesn't seem to be any more high-level than Vulkan RT. If you want, you
can go native, but it isn't clear to me what advantages you would even get,
considering how low-level DX12 already is.

------
leowoo91
Just noting that ray tracing doesnt belong to anyone and article is taking
shot on real-time ray tracing hype recently deployed into daily hardware.

