Appleseed – open-source physically-based rendering engine (appleseedhq.net)
359 points by generic_user on Feb 4, 2017 | 80 comments



Hello, founder of appleseed here!

Being the top story on Hacker News tonight was completely unexpected, but it's a good surprise and definitely appreciated publicity!

appleseed has been in active development since June 2009. It predates a number of other open source renderers by quite a few years, including Cycles (another fantastic project!).

I'm a production rendering engineer (e-on software, mental images, NVIDIA, Jupiter Jazz...). I started this project out of a personal interest in rendering, and as a platform for learning (there's always tons of new stuff to learn), research and experiments. All other team members are professionals currently working in the industry.

appleseed is one of the few open source renderers designed for production rendering and targeted at animation and VFX. In addition to fully programmable shading via OpenShadingLanguage, strong support for motion blur and many other specific features, it supports accurate spectral rendering, which is quite a unique combination.

We still have a ton of work ahead to make it a truly competitive renderer, but we're making regular progress: we're improving the core renderer every day, and lately we've been putting massive effort into improving our integration with DCC apps and achieving a comfortable workflow for artists. Loads left to do!

Let me finally add that I'm blessed to work with such a great team. Consistently top quality work. We're a small but welcoming community, and contributions are most welcome!

Feel free to ask me anything!


I'm not in the industry, but I would like to say that I am highly encouraged by the way you described your offering.

You express a level of care, consideration, and diplomacy that is sadly lacking just about everywhere.

The product itself looks amazing; I love seeing this high bar for open source.


Thank you for the lovely words!

We certainly do put a lot of care and effort into producing a high quality software product that is not only open source with a liberal license (MIT) but is also developed in the open (we're happy to invite anyone to our Slack team at https://appleseedhq.slack.com where all development discussions and decisions take place).


Can you list the top five influential research papers that have inspired you while working on Appleseed?

I have an ACM library account, so if they are ACM or SIGGRAPH papers that is fine.

--edit

And thank you for making your project open source.


A very interesting question!

It's not a research paper, but for learning purposes there's nothing better than the Physically-Based Rendering (PBRT) book by Matt Pharr, Greg Humphreys and Wenzel Jakob, already mentioned by others in this thread: http://www.pbrt.org/

One of the foundational papers is definitely The Rendering Equation, by James Kajiya: https://inst.eecs.berkeley.edu/~cs294-13/fa09/lectures/p143-...
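
For reference, the equation in its usual form (standard notation: x is the surface point, n the normal, Omega the hemisphere above x):

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_\Omega f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i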

Another highly influential research paper is certainly Eric Veach's PhD thesis, Robust Monte Carlo Methods For Light Transport Simulation (a.k.a. "The Bible"): http://graphics.stanford.edu/papers/veach_thesis/thesis.pdf

A few other papers stand out:

Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs, by Eric Heitz: http://jcgt.org/published/0003/02/03/. A somewhat difficult read, but an important paper.

Microfacet Models for Refraction through Rough Surfaces: http://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf

Physically-Based Shading at Disney: https://disney-animation.s3.amazonaws.com/library/s2012_pbs_...

A Practical Model for Subsurface Light Transport, by Henrik Wann Jensen: https://graphics.stanford.edu/papers/bssrdf/bssrdf.pdf

Light Transport Simulation with Vertex Connection and Merging: http://cgg.mff.cuni.cz/~jaroslav/papers/2012-vcm/

Finally, for learning, there's a nice lecture by John Carmack at QuakeCon: https://www.youtube.com/watch?v=IyUgHPs86XM


Excellent list, I had not seen some of these. Time to fire up Mathematica and get some coffee I suppose. And I'm sure the list is helpful for many others also.


Taking a quick look at it now, it reminds me a lot of Maxwell. Appleseed is quite an impressive renderer already!


Thanks! You're right, Maxwell and appleseed have a lot in common: both are (or can be) unbiased spectral renderers with animation rendering capabilities. They also both have a "studio" GUI application for scene composition, inspection and rendering.


2015 Interview with the lead developer: http://blenderdiplom.com/en/interviews/607-interview-francoi...

Seems like it's at around the same place, although the plugins are getting better and the renderer is starting to support complex scene features (not quite there yet).

Overall, though, there are hundreds of ray tracers and scene renderers (seemingly all in C++), so it's not clear whether it has any compelling advantages.


I think the most powerful feature of C++ is templating. Templating makes it very easy to write high-performance math code (no dynamic function calls, everything unrolled by the compiler...).

Just have a look at the state-of-the-art math libraries in Rust and compare them to something like Eigen or CGAL. The C++ code is way more flexible and expressive than the Rust code. If you don't believe me, check how the Rust libraries handle matrix implementations. Often you will find specialized implementations of 1x1 to 4x4 matrices, but no generic n-dimensional matrix code.
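
To illustrate (a minimal sketch with made-up names, nothing like Eigen's actual machinery): with the dimensions as template parameters, a fully generic matrix type costs nothing at runtime, and for small fixed sizes the compiler can inline and unroll everything.

    #include <array>
    #include <cstddef>

    // Generic R x C matrix with dimensions fixed at compile time: no heap
    // allocation, no virtual calls, and the compiler sees every loop bound.
    template <typename T, std::size_t R, std::size_t C>
    struct Matrix
    {
        std::array<T, R * C> m;

        T& operator()(std::size_t r, std::size_t c) { return m[r * C + c]; }
        const T& operator()(std::size_t r, std::size_t c) const { return m[r * C + c]; }
    };

    // One multiply works for every size combination, 4x4 or 17x3 alike.
    template <typename T, std::size_t R, std::size_t K, std::size_t C>
    Matrix<T, R, C> operator*(const Matrix<T, R, K>& a, const Matrix<T, K, C>& b)
    {
        Matrix<T, R, C> out{};
        for (std::size_t r = 0; r < R; ++r)
            for (std::size_t c = 0; c < C; ++c)
                for (std::size_t k = 0; k < K; ++k)
                    out(r, c) += a(r, k) * b(k, c);
        return out;
    }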


This will get better once we have type-level integers. It's just not there yet. You're totally right that it's a weak point at the moment.


Cool, I think Rust needs some time to mature.

I'm currently looking for a Rust guide that shows me some programming patterns.

For example:

- How to best implement an observer pattern

- Best practices for vector code

- Best practices for tree implementations and how to implement a lambda on top of it.

I'm interested in small snippets so that I can get some initial productive code and progress from there.


I agree that more docs on this kind of thing will be good, but I think going about it this way might be harder than you'd expect. Applying existing patterns and writing data structures is actually harder in Rust than in many other languages, and people that try to start with it often get stuck. Rust's own patterns and best practices are still kinda evolving, hence your maturity comment.


Expression Templates are a big reason for Eigen's (and Blaze's) success compared to older linear algebra libraries -- I think something similar should be doable in Rust.
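
A toy version of the idea (hypothetical, far simpler than Eigen's real machinery): a + b returns a lightweight expression object instead of computing a temporary vector, and the actual loop only runs, fused, when the expression is assigned to a concrete vector.

    #include <cstddef>
    #include <vector>

    struct Vec
    {
        std::vector<double> data;

        double operator[](std::size_t i) const { return data[i]; }
        std::size_t size() const { return data.size(); }

        // Assigning any expression evaluates it in one fused loop,
        // with no intermediate temporaries.
        template <typename E>
        Vec& operator=(const E& e)
        {
            data.resize(e.size());
            for (std::size_t i = 0; i < data.size(); ++i)
                data[i] = e[i];
            return *this;
        }
    };

    // a + b does no arithmetic: it just records the two operands.
    template <typename L, typename R>
    struct Sum
    {
        const L& l;
        const R& r;
        double operator[](std::size_t i) const { return l[i] + r[i]; }
        std::size_t size() const { return l.size(); }
    };

    // A real library would constrain this overload to its own expression types.
    template <typename L, typename R>
    Sum<L, R> operator+(const L& l, const R& r) { return {l, r}; }

With this, c = a + b + a builds a nested Sum<Sum<Vec, Vec>, Vec> and evaluates it element by element in a single pass.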

Most commercial software packages (e.g. Maya) expose their APIs via C++, though, which makes the use of other languages trickier.


There has been quite a lot of work done. You can check the release notes to see precisely what has been improved.

https://github.com/appleseedhq/appleseed/tags


Thanks! Our target is roughly one release every three months. We've somehow managed to maintain this cadence for nearly 8 years now.


It's improving regularly, check the GitHub repo. Definitely a project to keep an eye on if you're into rendering.


"Improving" is kind of a strong statement. See for example this recent PR: https://github.com/appleseedhq/appleseed/pull/1217/files. It seems like they just keep adding and removing special cases, so there's code churn but not really progress in any particular direction...


Perhaps I'm easier to please. The article you linked to was published in October 2015.

Just this year they've added code[0] to support alSurface[1], which I consider to be a significant (and unexpected) feature improvement.

Their news page[2] (to me) shows significant improvements since late 2015 as well, and it looks like the 1.7.x beta is just around the corner[3].

[0] https://github.com/appleseedhq/appleseed/commit/8b0bb02112f7...

[1] http://www.anderslanglands.com/alshaders/index.html

[2] http://appleseedhq.net/news.html

[3] https://github.com/appleseedhq/appleseed/commit/3dfcad020e0f...


Yeah, there's some progress. But it's slow compared to other projects. E.g. Tensorflow was released in November 2015 and already has 1.7x as many commits and 25x as many committers, and companies like IBM are adding support for it to their toolkits: https://techcrunch.com/2017/01/26/ibm-adds-support-for-googl... (roughly like what Maya did for Arnold, although Autodesk just bought the whole company: https://www.youtube.com/watch?v=DzoHAhODpi8)

Maybe it's not possible to make a similar kind of pipeline/platform-type thing for renderers, and they really do need all that material/shader garbage, but I have always been kind of disappointed with the state of the art...


Hello there, "Math Nerd". While you're comparing Appleseeds and Orangepeels, I'd just like to point out that indeed we do need all this material/shader "garbage".

If you're disappointed with the state of the art, maybe you don't really understand the state nor the art?


Tensorflow is also managed by the most influential tech company in the world.


Right; Tensorflow is managed well, and working on it might lead to a job, while appleseed seems like yet another open-source time sink with few prospects.

Obviously money isn't everything, so maybe there's a better criterion for judging projects, but at least on the "lasting impact on the world" front appleseed seems to fall short...


>Right; Tensorflow is managed well, and working on it might lead to a job

Tensorflow's management is not relevant to its popularity. It's been well demonstrated that companies are tripping all over themselves to use anything put out by Google and/or Facebook, regardless of its applicability to the company's actual problem space or the quality of the product as compared to competitors. There are a lot of people out there just itching to find any excuse to blow millions of dollars deploying any open-source project touted by Google or Facebook, often blissfully unaware that these projects are born out of necessity, not amusement, and that Google/FB would've happily been using a mature, out-of-the-box solution if it accommodated their needs.

The point is, saying "Look at an independent guy's project. It's not even as active as some of Google's projects, and they're just one of the biggest companies in the world! Hah!" is really, really unfair, and doesn't say anything about anything.

>Obviously money isn't everything, so maybe there's a better criteria for judging projects, but at least on the "lasting impact on the world" front appleseed seems to fall short...

That the project does not now appear poised for world domination doesn't mean it's not significant, influential, or important, or that it won't eventually go on to have a larger-than-expected impact. This is particularly true if it explores an interesting or rarely-used paradigm, or is otherwise noteworthy for its technical excellence.

KDE's Konqueror began as a custom web browser for their desktop environment and could easily have been classified as "yet another open-source time sink". But its engine, KHTML, became the foundation for WebKit. If someone is interested in making something interesting, there's no reason to begrudge it.


> It's been well demonstrated that companies are tripping all over themselves to use anything put out by Google and/or Facebook, regardless of its applicability to the company's actual problem space or the quality of the product as compared to competitors.

Angular being the quintessential example.


> There are a lot of people out there just itching to find any excuse to blow millions of dollars deploying any open-source project touted by Google or Facebook

Sure, but I was counting commits and contributors, which I don't think can be attributed solely to shallow business decisions.

> The point is, saying "Look at an independent guy's project. It's not even as active as some of Google's projects, and they're just one of the biggest companies in the world! Hah!" is really, really unfair, and doesn't say anything about anything.

They're not that big, only 60,000 people or so AFAICT, compared to e.g. Wal-mart's 2.1 million. And then it's only ~50 people who worked on Tensorflow directly: https://research.google.com/people/BrainTeam.html. Compared to Appleseed's 12: http://appleseedhq.net/about.html.

Is it really that unfair to compare a 12-person MIT-licensed C++ project on GitHub to a 50-person Apache-licensed C++ project on GitHub? Or to remind everyone that 98% of open source projects fail?

> KDE's Konqueror began as a custom web browser for their desktop environment and easily could've been classified "yet another open-source timesink". But its engine, KHTML, became the foundation for WebKit.

KDE started in 1996; they wrote an HTML library, didn't like it, and wrote a second version with a better architecture and ~10 developers. Appleseed doesn't seem to have that reactionary style of development, or even that much thought put into its design. I think it's easy to distinguish the two cases.

> If someone is interested in making something interesting, there's no reason to begrudge it.

Right. But I do begrudge them calling it "modern", when they have "no formal roadmap" (https://groups.google.com/forum/#!topic/appleseed-dev/wMA4oW...) and a long list of features to get to where SIGGRAPH was 5-10 years ago. If they took that one word out I wouldn't have such a problem.


We call appleseed "modern" because, since its inception in 2009, it has implemented the modern paradigm of rendering: unbiased, physically-based, programmable, with as few knobs and hacks as possible. The quintessential example of a "non-modern" ("classic"?) renderer is Pixar's PRMan before it dropped REYES, adopted path tracing and was renamed RenderMan.

Regarding the roadmap: we don't have a formal roadmap, but after each release (roughly every three months) we discuss and decide upon what we think would be the next logical steps, also taking into account which contributors will be participating and what their areas of competence are. There is a laundry list of features that any renderer must have to be considered usable by artists, and we're still missing some, so the road ahead is pretty clear, at least for a little longer.


So all software sucks unless it can land you a job?


All software sucks, period (http://harmful.cat-v.org/software/). And occasionally I open HN and write a meandering description of why the current top software sucks. Apparently this is not taken well, so I guess I'll stop.


I think literature sucks, but I don't go to literature forums and criticize it. I just spend my time doing what I love.


To some extent, I agree with you! appleseed is surely no exception. We're just trying our best to make it suck as little as possible :)


It's an honor to even be compared to Tensorflow :)

Keep in mind that we all have day jobs and that appleseed is developed by a handful of volunteers in their free time. That probably explains, at least partly, why our progress is "slow".


Indeed, there is no shortage of open source ray tracers and renderers. That said, there are very few open source renderers with full OSL (https://github.com/imageworks/OpenShadingLanguage) support and the required features for animation and VFX work. I actually know only two (both actively developed): Cycles and appleseed.


Here are some render comparisons for the curious: http://appleseedhq.net/stuff/renderers-comparison/index.html


- I've never heard of Tungsten.

- Mitsuba is a mainly-academic renderer by Wenzel Jakob (I say mainly because I haven't seen it in widespread use in non-academic settings).

The two important open source renderers missing from the comparison:

- Cycles (comes with Blender; produces great results, although it is said to have only recently started supporting a "PBR workflow")

- Luxrender (very popular with artists on DeviantArt; has very good OpenCL support)


That's because the point of the comparison is not to compare with other renderers, it's to check feature correctness. Mitsuba is widely used for that purpose.

Almost no one uses LuxRender or Cycles commercially to the best of my knowledge (or appleseed for that matter). Hobbyists use them, which is cool.

I'm in Los Angeles and work in the industry. Here, people use Renderman, Arnold, and in-house stuff mostly. I don't think Renderman and Arnold are used much outside of VFX. Arnold in particular is pricey, but works amazingly well.


Few people use Blender professionally, but those who do will almost certainly use Cycles. It's very capable now, quite similar to Arnold, but not as fast.


It's still desirable enough that it's getting commercial plugins (cycles4d).


> Here, people use Renderman, Arnold, and in-house stuff mostly.

Disney has Hyperion and Weta has Manuka, although I'm not sure if they license them out to other shops.


> I'm not sure if they license them out to other shops

Not that I know of. Many houses also have specialized in-house renderers for voxels and, occasionally, fluids.


> Many houses also have specialized in-house renderers for voxels and, occasionally, fluids

Hella cool! Who is rolling their own outside of Weta?

DD? Animal Logic? Double Negative? Blue Sky is super proprietary, I'm curious what they are up to...


A lot of studios used to have their own fluid / volume renderers, but it's less common now, as the main renderers can handle volumes/implicits fairly well these days.

But more studios have their own full renderers these days:

Weta has Manuka

Disney has Hyperion

Animal Logic has Glimpse

Framestore has Van Damme

Sony has their fork of Arnold


Sick! Any idea what ILM is up to, or are they all Renderman all the time now?


Mostly Renderman RIS now from what I've heard, although there's still a bit of Arnold being used, and the VRay stuff DMP were using is mostly Clarisse now.


Tungsten is from one of Wenzel Jakob's former students, I think. He maintains a pretty cool set of models: https://benedikt-bitterli.me/resources/


How am I supposed to compare the three at the top as a non-expert in this area? The middle one looks noisier than the other two. I notice the one on the left has a darker color on the dresser, which may be a sign of correct color handling, or it might be incorrect, with the one on the right being brighter due to correct handling of the window's light on it. And what's up with the graininess of all of them?

Just a few things that popped into my mind looking at them. Wouldn't mind learning a little more about objectively evaluating the renderers.


The color differences are from different "tone mapping" implementations after the render is stopped and can be ignored.

The point of that page is to evaluate renderer correctness (think of them as visual unit tests), not to really compare "which one looks better". For example, some of the images show appleseed with hard shadows, when they should be soft.

If you want to know which one is "right", Mitsuba is the one to look at as it's generally the most correct.

The graininess comes from not letting the machine render longer: the longer it goes, the less grainy it will be. That said, all of these renderers employ "tricks" to reduce the graininess, which can make it practically impossible to remove whatever graininess slips through.
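
(Aside: tone mapping is just a per-pixel curve applied to the raw radiance before display, so two renderers with identical raw output can still look different. A minimal sketch of the classic Reinhard operator, as one example of such a curve; no claim that any of these renderers uses exactly this one:)

    // Reinhard tone mapping: compresses unbounded HDR radiance into [0, 1)
    // for display. A different curve (or different parameters) shifts the
    // look of an image even when the underlying render is identical.
    double reinhard(double radiance)
    {
        return radiance / (1.0 + radiance);
    }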


You can't say any of these is "more correct" from looking at the renders. Those hard shadows stem from a difference in light size, which is almost certainly a problem with the scene parameters being treated differently. The light size may have been "lost in translation", turned to 0, creating a perfectly sharp shadow.


One limitation of appleseed today is that our physically-based sun model is a purely directional light, not a far away disk with a finite radius. This explains the hard shadows on some of the scenes.


> And what's up with the graininess of all of them?

I am not a rendering expert either, but IIRC "physical rendering" works by shooting photons across the scene based on a random distribution, and in the end the finished rendering is an average of all photon passes. If you don't shoot enough photons, the averaged result will be noisy because there will be some hotspots where significantly more photons hit, relative to the total photon count. More photons = longer rendering time, so the graininess is probably caused by a time constraint enforced for each of the images.


Typically it's not photons but the other way round: paths are primarily started from the camera.

The noise stems from the simple fact that those renderers try to solve an infinite-dimensional integral (all light reflected by all surfaces) with stochastic Monte Carlo methods, the most popular being path tracing. There are other methods (finite element radiosity, for example) to simulate light transport that do not exhibit stochastic noise, but those have fallen out of favor.
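
A toy illustration of where that noise comes from (plain one-dimensional Monte Carlo integration, nothing renderer-specific): the error of an N-sample estimate shrinks like 1/sqrt(N), so halving the noise costs four times the samples, which is exactly why renders stay grainy for so long.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Estimate the integral of sin(pi * x) over [0, 1] (exact value: 2/pi)
    // from n uniform random samples. The estimate's standard error falls off
    // as 1/sqrt(n) -- the same behavior that appears as per-pixel noise.
    double estimate(int n, std::mt19937& rng)
    {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += std::sin(3.141592653589793 * u(rng));
        return sum / n;
    }

    int main()
    {
        std::mt19937 rng(42);
        for (int n = 16; n <= 65536; n *= 16)
            std::printf("%6d samples: %.5f (exact: %.5f)\n",
                        n, estimate(n, rng), 2.0 / 3.141592653589793);
    }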


Without knowing the settings which were used by each renderer (and thus stuff like the BSDF materials, sampling techniques, random number generator sequences, etc, etc), it's very difficult to say.

All these things play a crucial part in how efficiently a renderer can produce an image with as little noise as possible.

But it's also possible to look through the noise (ignore it) and look for issues - e.g. the Classroom scene is very odd, as Appleseed doesn't have soft shadows (what light type was used? PhysicalSky/Sun? HDR IBL?), and the Tungsten illumination from the environment is less warm.


The classroom is lit by a physical sun+sky combination. A limitation of our current implementation of physical sun is that the sun is a directional light instead of being a far away disk of finite radius. This is why we don't have soft shadows in this scene.


In theory, for "unbiased path tracers", the results of all three should be the same, at infinite samples. Rendering for a finite amount of time, lower noise suggests the renderer is more efficient.

In practice, every renderer has different features and settings, so the actual scenes will only be approximately the same.


Hierarchical instancing YEAAHHH!!!!

Thanks guys; I no longer need to go write my own path tracer.

(I say this because I really like the idea of things like Chunky (the pathtracer for Minecraft), but I'd rather be able to simply export data to a more mature, full-featured renderer. Hierarchical instancing means you can do things like make bricks out of grains of sand, make brick blocks out of bricks, etc, without /necessarily/ blowing your memory budget when rendering a large map.)
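
A rough sketch of the data structure behind that (hypothetical types, not appleseed's actual API): every instance is just a transform plus a reference to shared geometry or to a group of child instances, so memory scales with unique assets rather than with the number of copies.

    #include <memory>
    #include <vector>

    struct Mesh { /* vertex data, stored once and shared */ };

    struct Transform { double m[16]; };  // 4x4 placement matrix

    // Illustrative only. A node references shared geometry and/or nested
    // child instances. Grains form a brick, bricks form a wall, walls form
    // a city -- yet the grain mesh exists in memory exactly once.
    struct Node
    {
        Transform local;                                     // relative to parent
        std::shared_ptr<const Mesh> mesh;                    // may be null
        std::vector<std::shared_ptr<const Node>> children;   // nested instances
    };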


If you like this you might also like Luxrender, another open source physically-based rendering engine: http://www.luxrender.net/


This looks really beautiful. The fact that it's open source adds to that beauty. I'm not even interested in physically based rendering and it makes me want to play with it.


If you want to learn about state of the art rendering, check out pbrt.org. Even without the book, the source code contains a wealth of information and was written for teaching.


On the download page there are a bunch of demo scenes that you can simply open and render to see all of the gears and knobs turning and watch your CPUs light up.


Thank you! As generic_user said, feel free to just download a release for your platform and a few test scenes, start appleseed.studio, load a scene and hit F6 to start interactive rendering. The Getting Started page contains detailed instructions: http://appleseedhq.net/docs/tutorials/gettingstarted.html


If you're into graphics pipelines, here are a few interesting videos of Gaffer node networks rendering with Appleseed: http://www.gafferhq.org/demos/


How does Gaffer compare to BMD Fusion? Thanks!

BTW, it would be really great if the webmaster of gafferhq.org considered adding some contrast to the parts of the website that are meant to be read by visitors.

(aka http://contrastrebellion.com/ )


There are quite a few interesting options for 3D; what is the best thing for 2D? A fast open source 2D rendering engine? Is Skia pretty much the only option?


I've seen Cairo and Antigrain used in the wild for 2D rendering, too.


It's all cool and awesome, but why put all those resources into a new renderer when one could contribute to Blender and such, which are way more mature?


Two points:

Blender is not a renderer; it's a modelling and animation program with a built-in renderer, Cycles. But it can connect to other renderers, too.

Cycles is not that old. It replaced Blender's prior renderer just a few years ago.


Appleseed has also been under active development for 7+ years, so it's not that young, especially considering that physically-based renderers really only started coming into full production use 3-4 years ago. There are still many visual effects studios using approximation-based setups today, so I'd say there's lots of room for "new renderers" to carve their niche.


I know Blender is not a renderer, but they include 3 renderers in the package: Blender Internal, Cycles, and Blender Game (this one is real-time, though).


1. Because it's fun. 2. Because you're using a different program than blender. 3. Because you want to work on code that you know wouldn't get accepted in the blender code base anyway. 4. Because you can.

I have contributed patches to Blender that got accepted. Even I feel the urge to write my own render engine, to try out different approaches and just so I can claim "I wrote that".


Can someone Explain Like I'm Five: "Physically-based rendering"? As opposed to...?


Non-physically based rendering :)

Kidding aside, as I understand it (I'm certainly no expert), the terms most often used are biased versus unbiased rendering, where biased has artificial limitations, while unbiased employs 'real world' calculations.

So why use biased renderers?

Well, they can typically create a very good result in much less time than an unbiased renderer. On the other hand, they typically also require that you mess around with a lot of knobs to get good results, whereas with an unbiased renderer you can just set up a 'real world' scene and it will render as such (albeit more slowly).

My guess is that Renderman is the most widely used biased renderer today, with Arnold being the most used unbiased.


That's not really what 'biased' and 'unbiased' mean - unfortunately the terms have been misused quite a bit, and are actually open to interpretation to a degree anyway.

You can have biased physically-based rendering and you can have unbiased 'not quite physically-based' rendering - in the latter case, for example, it's possible to render direct lighting only (so no secondary bounces, or global illumination), which, while obviously not the correct real-world result in physical terms, is technically unbiased for the light transport it does evaluate. Similarly, it's possible to have a spectral renderer (which in theory should be more accurate) which is biased, and a non-spectral renderer (RGB only) which is unbiased.

Biased can mean things like taking shortcuts or approximations - e.g. irradiance caches for diffuse results, or caching occlusion to slightly bias which light to sample per vertex so lights are sampled more efficiently. Both of these biased techniques generally give faster, less noisy renders, but it's possible you might not notice they are biased in terms of the effect they have on the render - it all depends on the scene, materials and the lighting. For simple scenes you probably won't notice; for more complex scenes with nested medium materials with glossy/pure specular responses, or with refractive caustics, it's very likely you will notice the effects biased rendering has compared to unbiased rendering.

Renderman RIS can be set up to be unbiased, but with its default settings (radiance clamps) it's not. Similarly, Arnold's default setting of having a light threshold (under which it won't sample a light) is also biased, but obviously this setting can be changed. Arnold can also cache diffuse contributions on hair, which is also biased.
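
To make the radiance-clamp example concrete (a minimal sketch, not any particular renderer's code): capping each sample's contribution kills the rare, very bright 'firefly' samples, so the image converges with less noise, but the estimate no longer averages to the true value. It converges to a slightly darker image, hence the bias.

    struct Color { double r, g, b; };

    // Clamp a path sample's radiance before accumulating it into the pixel.
    // Fireflies disappear (less noise), but energy is systematically thrown
    // away, so the estimator is biased toward a darker result.
    Color clamp_radiance(const Color& c, double max_value)
    {
        const auto clamp = [max_value](double v) { return v < max_value ? v : max_value; };
        return { clamp(c.r), clamp(c.g), clamp(c.b) };
    }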


Thanks for the correction!


Physically-based rendering means that we're essentially applying the laws of physics to the problem of representing materials and bouncing light off them. The renderer deals with well-defined physical quantities such as meters, watts, etc. To some extent, the renderer produces images that match real world experiments, or predicts what a setup would look like if it actually existed (predictive rendering).

Non-physically-based rendering ("classic rendering"?) means using ad hoc tricks to produce convincing images without following a formal framework. While recent AAA games all more or less follow the physically-based paradigm, older 3D games didn't and simply used custom models and tricks to produce good looking imagery.
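
As a tiny code-level example of the difference (a sketch, not appleseed's code): a physically-based diffuse BRDF returns albedo / pi, which guarantees the surface never reflects more energy than it receives, while ad hoc shading models often drop the 1/pi and compensate with hand-tuned light intensities.

    constexpr double kPi = 3.141592653589793;

    // Physically-based Lambertian BRDF: the 1/pi factor makes the model
    // energy-conserving -- integrated over the hemisphere, the reflected
    // light never exceeds the incoming light for albedo <= 1.
    double lambertian_brdf(double albedo)  // albedo in [0, 1]
    {
        return albedo / kPi;
    }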


What happened to the Ramen compositor?


No word on GPU acceleration?


The name clashes with a well-known anime movie, which also uses CGI. Maybe change the name of this engine?


The renderer's name is lowercase on purpose. ;)


Think of it as a homage! :)





