Being the top story on Hacker News tonight was completely unexpected, but it's a good surprise and the publicity is definitely appreciated!
appleseed has been in active development since June 2009. It predates a number of other open source renderers by quite a few years, including Cycles (another fantastic project!).
I'm a production rendering engineer (e-on software, mental images, NVIDIA, Jupiter Jazz...). I started this project out of a personal interest in rendering, and as a platform for learning (there's always tons of new stuff to learn), research and experiments. All the other team members are professionals currently working in the industry.
appleseed is one of the few open source renderers designed for production rendering and targeted at animation and VFX. In addition to fully programmable shading via Open Shading Language (OSL), strong support for motion blur and many other production-oriented features, it supports accurate spectral rendering, which is quite a unique combination.
We still have a ton of work ahead to make it a truly competitive renderer, but we're making regular progress: we're improving the core renderer every day, and lately we've been putting massive effort into improving our integration with DCC apps and achieving a comfortable workflow for artists. Loads left to do!
Let me finally add that I'm blessed to work with such a great team. Consistently top-quality work. We're a small but welcoming community, and contributions are most welcome!
Feel free to ask me anything!
You express a level of care, consideration, and diplomacy that is sadly lacking just about everywhere.
The product itself looks amazing; I love seeing this high bar for open source.
We certainly do put a lot of care and effort into producing a high-quality software product. Not only is it open source under a liberal license (MIT), it's also developed in the open: we're happy to invite anyone to our Slack team at https://appleseedhq.slack.com, where all development discussions and decisions take place.
I have an ACM library account, so ACM or SIGGRAPH papers are fine.
And thank you for making your project open source.
It's not a research paper, but for learning purposes there's nothing better than the Physically-Based Rendering (PBRT) book by Matt Pharr, Greg Humphreys and Wenzel Jakob, already mentioned by others in this thread: http://www.pbrt.org/
One of the foundational papers is definitely The Rendering Equation, by James Kajiya:
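For reference, here's the equation in the form it's usually written today (this is a common modern restatement, not Kajiya's original notation): x is a surface point, n its normal, ω_o and ω_i the outgoing and incoming directions, and f_r the BRDF.

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Everything a path tracer does is, at heart, a Monte Carlo estimate of that integral, applied recursively at every bounce.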
Another highly influential research paper is certainly Eric Veach's PhD thesis, Robust Monte Carlo Methods For Light Transport Simulation (a.k.a. "The Bible"): http://graphics.stanford.edu/papers/veach_thesis/thesis.pdf
A few other papers stand out:
Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs, by Eric Heitz: http://jcgt.org/published/0003/02/03/. A somewhat difficult read, but an important paper.
Microfacet Models for Refraction through Rough Surfaces:
Physically-Based Shading at Disney:
A Practical Model for Subsurface Light Transport, by Henrik Wann Jensen:
Light Transport Simulation with Vertex Connection and Merging:
Finally, for learning, there's a nice lecture by John Carmack at QuakeCon:
Seems like it's in around the same place, although the plugins are getting better and the renderer is starting to support complex scene features (not quite there yet).
Overall, though, there are hundreds of ray tracers and scene renderers (seemingly all in C++), so it's not clear whether it has any compelling advantages.
Just have a look at the state-of-the-art math libraries in Rust and compare them to something like Eigen or CGAL. The C++ code is far more flexible and expressive than the Rust code. If you don't believe me, check how the Rust libraries handle matrix implementations. Often you will find specialized implementations of 1x1 to 4x4 matrices, but no generic n-dimensional matrix code.
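To make the distinction concrete, here's a minimal sketch of what generic, compile-time-sized matrix code can look like in Rust using const generics (this `Matrix` type is made up for illustration, not taken from any of the libraries mentioned):

```rust
// A generic fixed-size matrix parameterized over its dimensions,
// as opposed to hand-written Matrix1x1 .. Matrix4x4 types.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Matrix<const R: usize, const C: usize> {
    data: [[f64; C]; R],
}

impl<const R: usize, const C: usize> Matrix<R, C> {
    fn zeros() -> Self {
        Matrix { data: [[0.0; C]; R] }
    }

    // Multiplication works for any compatible dimensions;
    // incompatible ones are rejected at compile time.
    fn mul<const K: usize>(&self, rhs: &Matrix<C, K>) -> Matrix<R, K> {
        let mut out = Matrix::<R, K>::zeros();
        for i in 0..R {
            for j in 0..K {
                for k in 0..C {
                    out.data[i][j] += self.data[i][k] * rhs.data[k][j];
                }
            }
        }
        out
    }
}

fn main() {
    let a = Matrix::<2, 3> { data: [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]] };
    let b = Matrix::<3, 2> { data: [[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]] };
    let c = a.mul(&b); // a 2x2 result
    println!("{:?}", c.data);
}
```

This covers fixed-size matrices only; truly n-dimensional (runtime-sized) arrays are a separate problem, which is where C++ expression-template libraries like Eigen still do a lot of heavy lifting.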
I'm currently looking for a Rust guide that shows me some programming patterns:
- How to best implement an observer pattern
- Best practices for vector code
- Best practices for tree implementations, and how to apply a lambda over them.
I'm interested in small snippets so that I can get some initial productive code and progress from there.
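For the observer pattern specifically, one common minimal approach is to have the subject store boxed callbacks and invoke them all on each event. A sketch under that assumption (all names here are made up):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// The subject owns a list of boxed FnMut callbacks and
// notifies every one of them when an event occurs.
struct Subject<E> {
    observers: Vec<Box<dyn FnMut(&E)>>,
}

impl<E> Subject<E> {
    fn new() -> Self {
        Subject { observers: Vec::new() }
    }

    fn subscribe<F: FnMut(&E) + 'static>(&mut self, f: F) {
        self.observers.push(Box::new(f));
    }

    fn notify(&mut self, event: &E) {
        for obs in &mut self.observers {
            obs(event);
        }
    }
}

fn main() {
    // Shared state the observer writes into; Rc<RefCell<..>> lets the
    // closure keep a handle to it while the caller keeps another.
    let log = Rc::new(RefCell::new(Vec::new()));
    let mut subject = Subject::new();

    let log2 = Rc::clone(&log);
    subject.subscribe(move |e: &String| log2.borrow_mut().push(e.clone()));

    subject.notify(&"render finished".to_string());
    println!("{:?}", log.borrow());
}
```

Closures plus `Rc<RefCell<..>>` sidestep most of the borrow-checker friction that a textbook subject-holds-references-to-observers design runs into.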
Most commercial software packages (e.g. Maya) expose their APIs via C++, though, which makes the use of other languages trickier.
Just this year they've added code to support alSurface, which I consider to be a significant (and unexpected) feature improvement.
Their news page (to me) shows significant improvements since late 2015 as well, and it looks like the 1.7.x beta is just around the corner.
Maybe it's not possible to make a similar kind of pipeline/platform-type thing for renderers, and they really do need all that material/shader garbage, but I always have been kind of disappointed with the state of the art...
If you're disappointed with the state of the art, maybe you don't really understand the state nor the art?
Obviously money isn't everything, so maybe there's a better criterion for judging projects, but at least on the "lasting impact on the world" front appleseed seems to fall short...
Tensorflow's management is not relevant to its popularity. It's been well demonstrated that companies are tripping all over themselves to use anything put out by Google and/or Facebook, regardless of its applicability to the company's actual problem space or the quality of the product as compared to competitors. There are a lot of people out there just itching to find any excuse to blow millions of dollars deploying any open-source project touted by Google or Facebook, often blissfully unaware that these projects are born out of necessity, not amusement, and that Google/FB would've happily been using a mature, out of the box solution if it accommodated their needs.
The point is saying "Look at an independent guy's project. It's not even as active as some of Google's projects, and they're just one of the biggest companies in the world! Hah!" is really, really unfair, and doesn't say anything about anything.
>Obviously money isn't everything, so maybe there's a better criteria for judging projects, but at least on the "lasting impact on the world" front appleseed seems to fall short...
That the project does not now appear poised for world domination doesn't mean it's not significant, influential, or important, or that it won't eventually go on to have a larger-than-expected impact. This is particularly true if it explores an interesting or rarely-used paradigm, or is otherwise noteworthy for its technical excellence.
KDE's Konqueror began as a custom web browser for their desktop environment and easily could've been classified "yet another open-source timesink". But its engine, KHTML, became the foundation for WebKit. If someone is interested in making something interesting, there's no reason to begrudge it.
Angular being the quintessential example.
Sure, but I was counting commits and contributors, which I don't think can be attributed solely to shallow business decisions.
> Saying "Look at an independent guy's project. It's not even as active as some of Google's projects, and they're just one of the biggest companies in the world! Hah!" is really, really unfair, and doesn't say anything about anything.
They're not that big, only 60,000 people or so AFAICT, compared to e.g. Wal-mart's 2.1 million. And then it's only ~50 people who worked on Tensorflow directly: https://research.google.com/people/BrainTeam.html. Compared to Appleseed's 12: http://appleseedhq.net/about.html.
Is it really that unfair to compare a 12-person MIT-licensed C++ project on GitHub to a 50-person Apache-licensed C++ project on GitHub? Or to remind everyone that 98% of open source projects fail?
> KDE's Konqueror began as a custom web browser for their desktop environment and easily could've been classified "yet another open-source timesink". But its engine, KHTML, became the foundation for WebKit.
KDE started in 1996; they wrote an HTML library, didn't like it, and wrote a second version with a better architecture and ~10 developers. Appleseed doesn't seem to have that iterative, rewrite-driven style of development, or even that much thought put into its design. I think it's easy to distinguish the two cases.
> If someone is interested in making something interesting, there's no reason to begrudge it.
Right. But I do begrudge them calling it "modern", when they have "no formal roadmap" (https://groups.google.com/forum/#!topic/appleseed-dev/wMA4oW...) and a long list of features to get to where SIGGRAPH was 5-10 years ago. If they took that one word out I wouldn't have such a problem.
Regarding the roadmap: we don't have a formal roadmap, but after each release (roughly every three months) we discuss and decide upon what we think would be the next logical steps, also taking into account which contributors will be participating and what their areas of competence are. There is a laundry list of features that any renderer must have to be considered usable by artists, and we're still missing some, so the road ahead is pretty clear, at least for a little longer.
Keep in mind that we all have day jobs and that appleseed is developed by a handful of volunteers in their free time. That probably explains, at least partly, why our progress is "slow".
- Mitsuba is a mainly-academic renderer by Wenzel Jakob (I say mainly because I haven't seen widespread use of it in non-academic settings).
The two important open source renderers missing from the comparison:
- Cycles (comes with Blender; produces great results, although it's said that it only recently started doing a "PBR workflow")
- Luxrender (very popular with artists on deviantart; has very good OpenCL support)
Almost no one uses LuxRender or Cycles commercially to the best of my knowledge (or appleseed for that matter). Hobbyists use them, which is cool.
I'm in Los Angeles and work in the industry. Here, people use Renderman, Arnold, and in-house stuff mostly. I don't think Renderman and Arnold are used much outside of VFX. Arnold in particular is pricey, but works amazingly well.
Disney has Hyperion and Weta has Manuka, although I'm not sure if they license them out to other shops.
Not that I know of. Many houses also have specialized in-house renders for voxels and occasionally, fluids.
Hella cool! Who is rolling their own outside of Weta?
DD? Animal Logic? Double Negative? Blue Sky is super proprietary, I'm curious what they are up to...
But more studios have their own full renderers these days:
Weta has Manuka
Disney has Hyperion
Animal Logic has Glimpse
Framestore has Van Damme
Sony has their fork of Arnold
Just a few things that popped into my mind looking at them. Wouldn't mind learning a little more in objectively evaluating the renderers.
The point of that page is to evaluate renderer correctness (think of them as visual unit tests), not to really compare "which one looks better". For example, some of the images show appleseed with hard shadows, when they should be soft.
If you want to know which one is "right", Mitsuba is the one to look at as it's generally the most correct.
The graininess comes from not letting the machine render longer: the longer it goes, the less grainy the image will be. That said, all of these renderers employ "tricks" to reduce graininess, which can make whatever graininess slips through practically impossible to remove.
I am not a rendering expert either, but IIRC "physical rendering" works by shooting photons across the scene based on a random distribution; the finished rendering is an average of all the photon passes. If you don't shoot enough photons, the averaged-out version will be noisy, because there will be some hotspots where significantly more photons hit, relative to the total photon count. More photons = longer rendering time, so the graininess is probably caused by a time constraint enforced for each of the images.
The noise stems from the simple fact that those renderers try to solve an infinite-dimensional integral (all light reflected by all surfaces) with stochastic Monte Carlo methods, the most popular being path tracing. There are other methods (finite element radiosity, for example) to simulate light transport that do not exhibit stochastic noise, but those have fallen out of favor.
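To make the noise behavior concrete, here's a tiny self-contained sketch (not from any real renderer) that estimates a simple 1D integral by Monte Carlo. The error, which shows up as grain in a render, shrinks roughly as 1/sqrt(N) with the sample count:

```rust
// Minimal xorshift64 PRNG so the example needs no dependencies.
struct XorShift(u64);

impl XorShift {
    fn next_f64(&mut self) -> f64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        // Map the top 53 bits to [0, 1).
        (x >> 11) as f64 / (1u64 << 53) as f64
    }
}

// Monte Carlo estimate of the integral of f(x) = x^2 over [0, 1].
// The exact value is 1/3; the estimate fluctuates around it.
fn estimate(samples: u32, rng: &mut XorShift) -> f64 {
    let mut sum = 0.0;
    for _ in 0..samples {
        let x = rng.next_f64();
        sum += x * x;
    }
    sum / samples as f64
}

fn main() {
    let mut rng = XorShift(0x123456789abcdef);
    for &n in &[16, 256, 4096, 65536] {
        let err = (estimate(n, &mut rng) - 1.0 / 3.0).abs();
        println!("{:6} samples -> error {:.5}", n, err);
    }
}
```

A path tracer does exactly this, except the "integral" is the light arriving at each pixel, and each "sample" is a full light path through the scene, which is why halving the noise costs 4x the render time.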
All these things play a crucial part in how efficiently a renderer can produce an image with as little noise as possible.
But it's also possible to look through the noise (ignore it) and look for issues: e.g., the Classroom scene is very odd, as appleseed doesn't render soft shadows (what light type was used? Physical Sky/Sun? HDR IBL?), and Tungsten's illumination from the environment is less warm.
In practice, every renderer has different features and settings, so the actual scenes will only be approximately the same.
Thanks guys; I no longer need to go write my own path tracer.
(I say this because I really like the idea of things like Chunky (the pathtracer for Minecraft), but I'd rather be able to simply export data to a more mature, full-featured renderer. Hierarchical instancing means you can do things like make bricks out of grains of sand, make brick blocks out of bricks, etc, without /necessarily/ blowing your memory budget when rendering a large map.)
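To sketch why hierarchical instancing is so memory-friendly: an assembly just references child assemblies by index plus a transform, so the expanded scene can have millions of leaf primitives while memory stays proportional to the number of distinct prototypes and instances. A made-up illustration (these type names are not appleseed's actual API):

```rust
// An assembly holds some primitives of its own, plus instances of
// other assemblies: (child index, 4x4 transform stored row-major).
struct Assembly {
    own_primitives: u64,
    instances: Vec<(usize, [f64; 16])>,
}

// Count the leaf primitives the scene would expand to,
// without ever actually expanding it in memory.
fn expanded_count(assemblies: &[Assembly], root: usize) -> u64 {
    let a = &assemblies[root];
    let mut total = a.own_primitives;
    for &(child, _xform) in &a.instances {
        total += expanded_count(assemblies, child);
    }
    total
}

fn main() {
    const IDENTITY: [f64; 16] = [
        1.0, 0.0, 0.0, 0.0,
        0.0, 1.0, 0.0, 0.0,
        0.0, 0.0, 1.0, 0.0,
        0.0, 0.0, 0.0, 1.0,
    ];
    // One grain of sand; a brick = 1,000 grains; a wall = 1,000 bricks.
    let grain = Assembly { own_primitives: 1, instances: vec![] };
    let brick = Assembly {
        own_primitives: 0,
        instances: (0..1_000).map(|_| (0, IDENTITY)).collect(),
    };
    let wall = Assembly {
        own_primitives: 0,
        instances: (0..1_000).map(|_| (1, IDENTITY)).collect(),
    };
    let scene = vec![grain, brick, wall];
    // 1,000,000 leaf primitives from ~2,000 stored instances.
    println!("{}", expanded_count(&scene, 2));
}
```

The renderer only ever stores the two prototypes and ~2,000 instance records, yet ray intersection sees the fully expanded million-primitive scene by concatenating transforms as it descends the hierarchy.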
BTW, it would be really great if the webmaster of gafferhq.org considered adding some contrast to the parts of the website that are meant to be read by visitors.
(aka http://contrastrebellion.com/ )
Blender is not a renderer, it's a modelling and animation program with a built-in renderer, Cycles. But it can connect to other renderers, too.
Cycles is not that old. It replaced Blender's prior renderer just a few years ago.
I have contributed patches to Blender that got accepted. Even I feel the urge to write my own render engine, to try out different approaches and just so I can claim "I wrote that".
Kidding aside, as I understand it (I'm certainly no expert), the terms most often used are biased versus unbiased rendering, where biased has artificial limitations, while unbiased employs 'real world' calculations.
So why use biased renderers?
Well, they can typically create a very good result in much less time than an unbiased renderer. On the other hand, they typically also require that you mess around with a lot of knobs in order to get good results, whereas with an unbiased renderer you can just set up a 'real world' scene and it will render as such (albeit more slowly).
My guess is that Renderman is the most widely used biased renderer today, with Arnold being the most used unbiased one.
You can have biased physically-based rendering, and you can have unbiased 'not quite physically-based' rendering. In the latter case, for example, it's possible to render direct lighting only (so no secondary bounces, or global illumination), which, while obviously not the correct real result in physical terms, is technically unbiased in terms of light transport to the extent that you're evaluating direct lighting only. Similarly, it's possible to have a spectral renderer (which in theory should be more accurate) that is biased, and a non-spectral renderer (RGB only) that is unbiased.
Biased can mean things like taking shortcuts or approximations, e.g. irradiance caches for diffuse results, or caching occlusion in order to slightly bias which light to sample per vertex so as to sample lights more efficiently. Both of these biased techniques generally give faster, less noisy renders, and it's possible you might not notice the effect they have on the render; it all depends on the scene, materials and lighting. For simple scenes you probably won't notice; for more complex scenes with nested medium materials with glossy/pure specular responses, or with refractive caustics, it's very likely you will notice the effects biased rendering has compared to unbiased rendering.
Renderman RIS can be set up to be unbiased, but with default settings (radiance clamps) it's not. Similarly, Arnold's default setting of having a light threshold (under which it won't sample a light) is also biased, though obviously this setting can be changed. Arnold can also cache diffuse contributions on hair, which is likewise biased.
Non-physically-based rendering ("classic rendering"?) uses ad hoc tricks to produce convincing images without following a formal framework. While recent AAA games all more or less follow the physically-based paradigm, older 3D games didn't, and simply used custom models and tricks to produce good-looking imagery.