
Mitsuba – A physically based renderer - leohutson
https://www.mitsuba-renderer.org
======
corysama
Mitsuba was referenced in the recent SIGGRAPH paper by DICE (Battlefield 4) as
the source of ground truth for verifying their new volumetric rendering
system.

[http://www.frostbite.com/2015/08/physically-based-unified-vo...](http://www.frostbite.com/2015/08/physically-based-unified-volumetric-rendering-in-frostbite/)

(Starting at slide 31)

------
danielvf
This is fascinating. If I'm reading it right, this is a "research" renderer,
rather than a "production" one, since it uses the CPU, not the GPU, for
computation?

~~~
bhouston
A lot of production renderers use the CPU as well: most DreamWorks Animation
and Pixar films are still mostly CPU-rendered. I've used Mitsuba, and it is
pretty slow and incomplete compared to production renderers (Mitsuba's scene
definition format is just brutal compared to Arnold, V-Ray, or PRMan), but
some of its results are not yet available in production renderers.

~~~
tomvbussel
It's a bit unfair to compare the speed of Mitsuba with the speed of RenderMan.
Mitsuba only has a simple kd-tree (although a highly optimized one), while
RenderMan most likely has a state-of-the-art BVH that utilizes SIMD
instructions. RenderMan also has a lot of sampling controls, while Mitsuba's
path tracer always traces one light sample and one BSDF sample at each
intersection. There are multiple software engineers at Pixar who focus solely
on optimizing RenderMan, while most of Mitsuba is implemented by a single
person, who is a full-time researcher.
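To make the "one light sample and one BSDF sample at each intersection" point concrete, here is a minimal sketch of how a path tracer typically combines those two samples with multiple importance sampling (the balance heuristic). This is an illustrative sketch, not Mitsuba's actual code; all function names and parameters are hypothetical.

```python
# Hypothetical sketch: at each bounce the integrator draws one light
# (next-event estimation) sample and one BSDF sample, then weights each
# by the MIS balance heuristic so neither strategy is double-counted.

def balance_heuristic(pdf_a: float, pdf_b: float) -> float:
    """MIS weight for a sample drawn from strategy A, competing with B."""
    total = pdf_a + pdf_b
    return pdf_a / total if total > 0.0 else 0.0

def direct_lighting(light_contrib: float, light_pdf: float,
                    bsdf_pdf_toward_light: float,
                    bsdf_contrib: float, bsdf_pdf: float,
                    light_pdf_toward_bsdf_dir: float) -> float:
    """Combine one light sample and one BSDF sample at an intersection.

    *_contrib values are the unweighted integrand evaluations (emitted
    radiance times BSDF times geometry terms); *_pdf values are the
    densities of each sampling strategy for the sampled direction.
    """
    # Weight the light sample against the chance the BSDF strategy
    # would have produced the same direction ...
    w_light = balance_heuristic(light_pdf, bsdf_pdf_toward_light)
    # ... and symmetrically for the BSDF sample.
    w_bsdf = balance_heuristic(bsdf_pdf, light_pdf_toward_bsdf_dir)
    estimate = 0.0
    if light_pdf > 0.0:
        estimate += w_light * light_contrib / light_pdf
    if bsdf_pdf > 0.0:
        estimate += w_bsdf * bsdf_contrib / bsdf_pdf
    return estimate
```

The weights for the two strategies sum to one for any given direction, which is what keeps the combined estimator unbiased while letting whichever strategy has the higher density dominate (e.g. BSDF sampling on glossy surfaces, light sampling for small bright lights).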

------
bl4ckm0r3
how does this compare to maxwell render?

