
Blender Cycles Turbocharged: how we made rendering 10x faster - dhruvbhatia
https://cloud.blender.org/blog/cycles-turbocharged-how-we-made-rendering-10x-faster
======
thomastjeffery
> how we made -rendering- _motion blur_ 10x faster

------
Ericson2314
It just occurred to me that motion blur is not some gimmick, but temporal
rather than spatial anti-aliasing. It's fundamental to accurate sampling.

~~~
theoh
Yes, both problems can be solved by the same sampling technique, e.g.
[https://en.wikipedia.org/wiki/Distributed_ray_tracing](https://en.wikipedia.org/wiki/Distributed_ray_tracing)

In theory, upping the resolution will also remove the need for spatial anti-
aliasing, though it's not efficient. Distributing rays in time feels a bit
different because what you're really trying to do is integrate over an
interval of time. So I'm not sure I agree that they are the same thing.
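The shared technique is easy to sketch, though: one Monte Carlo estimator jitters samples in both pixel space and shutter time, so spatial anti-aliasing and motion blur fall out of the same loop. A minimal sketch (`shade(x, y, t)` is a hypothetical radiance function, not anything from Cycles):

```python
import random

def render_pixel(shade, x, y, n_samples=64, shutter=(0.0, 1.0)):
    """Monte Carlo estimate of a pixel's value, integrating over both the
    pixel footprint (spatial AA) and the shutter interval (motion blur).
    `shade(x, y, t)` is a hypothetical function returning the radiance
    seen through image point (x, y) at time t."""
    t0, t1 = shutter
    total = 0.0
    for _ in range(n_samples):
        sx = x + random.random()       # jitter within the pixel footprint
        sy = y + random.random()
        st = random.uniform(t0, t1)    # jitter within the shutter interval
        total += shade(sx, sy, st)
    return total / n_samples
```

Whether you call that "the same thing" or two integrals handled by one estimator is mostly a matter of framing.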

~~~
londons_explore
Remember that to remove aliasing (spatial and temporal), you want no
frequencies above the Nyquist rate.

Typically, anti-aliasing averages multiple samples across the width of the
pixel (MSAA), or across the timespan of the frame, to eliminate these
frequencies.

That is in fact a rectangular function in the time/spatial domain, which is
wrong. A rectangular function _reduces_ rather than eliminates the frequency
components above the Nyquist rate, and also attenuates frequencies near but
below the Nyquist rate (leading to blur).

In fact, you want a rectangular _frequency domain_ function, which in the
spatial/time domain is a sinc function.

This isn't done in real cameras because it is technically too hard, but in 3D
rendering, it should be done, and will produce smoother animations.

I have never seen anyone do this, but results should be theoretically better.
I’d also like to see a freeze frame of a sinc time domain sampled motion blur
- it would probably look very weird, even though it looks good at the playback
rate.
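The difference between the two filters is concrete: a box (rectangular) filter gives every in-frame sample equal weight, while a sinc-derived filter weights samples by their distance in time and extends past the frame boundaries. Since the ideal sinc has infinite support, a practical stand-in is a Lanczos-windowed sinc; a minimal sketch (times in frame units, centered on the frame being reconstructed):

```python
import math

def lanczos_weight(t, a=2.0):
    """Windowed sinc kernel: the ideal low-pass filter is sinc(t), but its
    infinite support is impractical, so a Lanczos window truncates it to
    |t| < a while keeping the central lobe and first side lobes."""
    if t == 0.0:
        return 1.0
    if abs(t) >= a:
        return 0.0
    x = math.pi * t
    return a * math.sin(x) * math.sin(x / a) / (x * x)

def temporal_weights(sample_times, frame_time, a=2.0):
    """Normalized reconstruction weights for samples at `sample_times`.
    A box filter would give every in-frame sample weight 1/N; this kernel
    instead weights by temporal distance -- and side-lobe weights can go
    negative, which is part of why a sinc-filtered freeze frame would
    look strange."""
    raw = [lanczos_weight(t - frame_time, a) for t in sample_times]
    norm = sum(raw)
    return [w / norm for w in raw]
```

Note that samples from neighboring frame intervals (|t| > 0.5) still get nonzero, sometimes negative, weight, which is exactly what a box filter cannot express.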

------
santaclaus
> Rendering with motion blur is known to be a technical challenge

Ok, I'm not a rendering person, but didn't Ravi Ramamoorthi have a series of
papers that solved motion blur?

~~~
boulos
I assume you're thinking of Kevin Egan's paper from 2009 [1]? There have been
some follow ups since, but the basic idea is "you can filter the hell out of
it, kind of". Sadly, while these look okay, such filtering is still prone to
over blurring. The frames described actually focus on hair which is a perfect
example of what wouldn't work well in the filtering systems, _and_ requires
enough samples for anti-aliasing that motion blur comes "for free".

[1] [http://www.cs.columbia.edu/cg/mb/](http://www.cs.columbia.edu/cg/mb/)

------
boulos
They don't say how, but I assume like all of us they moved to a motion-blur
friendly time-based BVH. I'm surprised this only recently came up for Blender!

~~~
__s
> After a few days of investigation, Sergey improved the layout of hair
> bounding boxes for BVH structure. What does this mean? A more in-depth
> explanation is coming soon.

Sounds more like they optimized their BVH implementation

~~~
boulos
Sorry if I wasn't clear. Using a time-dependent BVH (where instead of two vec3
for the corners you store four, one pair for t=0 and one for t=1) is an
"optimization". Given the later sentence:

> After that, he applied the same optimization to triangles (for actual
> character geometry)

it suggested that the bug was just using a single bounding box that ignored
the motion (which is correct, but slow).
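The two-keyframe scheme described above is simple to sketch: each node stores its bounds at t=0 and t=1, and traversal interpolates them at the ray's time instead of testing one fat box that covers the whole motion. A minimal sketch of the interpolation step (not Cycles' actual code; bounds as 3-tuples, linear interpolation being conservative when the underlying motion between the two keys is itself linear):

```python
def lerp_bounds(lo0, hi0, lo1, hi1, t):
    """Interpolate a BVH node's AABB at ray time t in [0, 1], given one
    bounding-box pair stored at t=0 (lo0, hi0) and one at t=1 (lo1, hi1).
    A single static box would have to enclose the motion's full sweep,
    making it much fatter and traversal correspondingly slower."""
    lo = tuple(a + (b - a) * t for a, b in zip(lo0, lo1))
    hi = tuple(a + (b - a) * t for a, b in zip(hi0, hi1))
    return lo, hi
```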

------
CyberDildonics
Every blog article I see about 'making X Y times faster', the answer is
always 'by doing something that reasonable people would assume we had
already tried', and this article is no different. This is just Cycles
implementing a crude version of the cutting edge from 15 years ago.

