It's also a loving testament to our era of content generation that a frequent task in creating a children's film is solving complex scientific or engineering problems. They recreate so many of the strange nuances and hidden beauty of our world (anisotropic rendering, Navier–Stokes equations, subsurface scattering, caustics, flocking, dendritic growth of snowflakes, global illumination, ...) just so they can render the purest representation of it for us.
I genuinely am so glad for the time I spent in art and computer graphics, though neither has much relevance to my career now, simply because it changed the way I look at the world. When you try to recreate a real-life scene from scratch, you realize just how much happens in the real world without you ever appreciating it.
Even after the physics and science above: Do people frequently travel down this path? Might it become a holloway? Does the roof guttering drain out here? There'll likely be moss or worn paint. Old stone steps of an ancient tavern? The stone will likely be worn away at the center or on either side of the door. The background scenery in a modern computer graphics film has dozens or hundreds of hours of work in it and will be filled with nuance and/or in-jokes. If you need to create a background world in any appreciable detail, why not use it as a canvas to keep telling a story?
I wish there were a website collecting all these stories and articles. It could also ask the authors for permission and offer epub/pdf downloads of these stories. That would be a nice read while traveling.
With all due respect, one convenient website aggregating all that material is not a practical idea. It's like asking for one huge magazine to exist which contains everything worth reading about a huge field like CS.
Edit: I mean, it would be convenient, but there's no royal road to finding stuff that's worth reading. https://queue.acm.org is pretty good, for example
NetFlix for text != zero curation, NetFlix curates their selection.
They're NetFlix for books and formal periodical publications, both of which are more movie-like: there are a relatively small number of publishers and publishing a full-length book is a substantial, multi-person project, as is publishing a magazine or newspaper. There's a huge wealth of text out there that has no chance of ever hitting any library or central seller/distributor of etexts.
Judging by the stuff that comes up when you browse categories, or when NetFlix wants to suggest something to you, the only "curation" involved is whether they were able to get rights to display the content. There's no quality standard, and there's no logical standard (like "if we offer the second film in this series, maybe we should offer the first one too") either.
- The first and third book are currently checked out.
- The library acquired all four books, but the first and third have been destroyed / lost / stolen by customers.
- The library didn't acquire any of the books, but the second and fourth were donated and the library chose to absorb them rather than selling or destroying them.
None of those are even conceptually applicable to NetFlix. I definitely would not expect:
- The library has a limited budget, and considered that it would be better spent buying the second book than the first book.
Note that NetFlix's inventory of physical discs doesn't suffer from the same problems that its streaming inventory does. That is (most likely) because it's trivial to obtain the legal right to distribute the physical discs -- you just buy them on the open market, the same way a library does with its books. NetFlix's streaming inventory isn't "curated", it's not under NetFlix's control at all.
"However, some earlier phase of the production pipeline used a fixed-size buffer for storing object names and would shorten any longer names, keeping only the end and helpfully adding a few periods to show that something had been lost: “…year/LeftArm/Hand/IndexFinger/Knuckle2”.
Thence, all of the object names the renderer saw were of that form, the hash function hashed all of them to the bucket for ‘.’, and the hash table was actually a big linked list. Good times."
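The failure mode is easy to reproduce: if the hash function only looks at a short prefix of the key, then every truncated name, which now begins with the same ellipsis marker, lands in the same bucket and lookups degrade to a linear scan. A toy sketch of the effect (the prefix-only hash here is hypothetical, not the actual production code):

```python
from collections import defaultdict

def prefix_hash(name: str, nbuckets: int = 256) -> int:
    """Degenerate (hypothetical) hash: only the first byte of the name matters."""
    return ord(name[0]) % nbuckets if name else 0

buckets = defaultdict(list)  # bucket index -> chained entries
names = [
    "...year/LeftArm/Hand/IndexFinger/Knuckle2",
    "...year/RightArm/Hand/Thumb/Knuckle1",
    "...tree/Branch07/Leaf113",
]
for n in names:
    buckets[prefix_hash(n)].append(n)

# Every truncated name begins with '.', so all entries share one bucket:
# the "hash table" is effectively a single linked list, and lookup is O(n).
assert len(buckets) == 1
```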
Basically they have a few different techniques that couldn't be directly translated into PBRT, like lights that only hit certain objects, so the lighting is not as good in the PBRT renders, along with a few other things. Nothing super advanced, just some things that were added to their renderer for artistic control reasons that PBRT doesn't have.
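"Lights that only hit certain objects" is commonly called light linking; a renderer without it sums every light's contribution at every shading point. A minimal sketch of the idea, with made-up names and a trivially simplified lighting model (not pbrt's actual API):

```python
# Minimal light-linking sketch: each light carries an optional set of
# object names it is allowed to illuminate. Names and the flat
# "intensity" model are illustrative only.

class Light:
    def __init__(self, name, intensity, linked_objects=None):
        self.name = name
        self.intensity = intensity
        # None means "illuminates everything" (the default case)
        self.linked_objects = linked_objects

def shade(object_name, lights):
    """Sum contributions only from lights linked to this object."""
    total = 0.0
    for light in lights:
        if light.linked_objects is None or object_name in light.linked_objects:
            total += light.intensity
    return total

lights = [
    Light("sun", 1.0),                                  # hits everything
    Light("eye_glint", 0.5, linked_objects={"Moana"}),  # character-only fill
]

assert shade("Moana", lights) == 1.5   # sun plus the linked fill light
assert shade("Ocean", lights) == 1.0   # the fill light is skipped
```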
then watch it a second time for the animation and rendering....
Is this something that Pixar's USD format would help with?
If folks around here have experience with it, I'd love to see a discussion on that.
Implementing better mesh handling is an excellent exercise for the reader.
As an aside, the book "Physically Based Rendering", which is the source of pbrt, is _by far_ the best programming book I've ever read.
So many books spend a lot of time on trivial stuff, while others dive into the difficult theory and math but don't show how to translate it into code.
PBR goes through the difficult theory and math in detail while also showing how to implement it in code, explaining potential pitfalls and tricks along the way. And it does so in an elegant and easy to understand way.
And if you have questions, the authors answer them in a Google Group.
I'm joking; thanks for suggesting that book. I don't know much about this stuff, but I might actually read it!
Since a physically-based ray tracer involves a lot of different theory and math, it can be quite interesting even if you're not in it for the actual ray tracing. For example, there's sampling with low-discrepancy sequences, kd-trees and other acceleration structures, color representation and conversion, etc.
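As one concrete example of that variety: a Halton-style low-discrepancy sequence generates well-distributed sample points by reflecting the digits of an integer in a prime base about the radix point. A minimal version of the radical inverse (the book's actual implementation is considerably more elaborate):

```python
def radical_inverse(base: int, index: int) -> float:
    """Reflect the base-`base` digits of `index` about the radix point.

    E.g. index 6 in base 2 is 110, which reflects to 0.011 = 0.375.
    """
    result, inv_base = 0.0, 1.0 / base
    while index > 0:
        index, digit = divmod(index, base)
        result += digit * inv_base
        inv_base /= base
    return result

# 2D Halton points: base 2 for x, base 3 for y. Unlike uniform random
# samples, these fill the unit square fairly evenly even for small counts.
points = [(radical_inverse(2, i), radical_inverse(3, i)) for i in range(8)]

assert radical_inverse(2, 6) == 0.375  # 110 -> 0.011 in base 2
assert radical_inverse(2, 1) == 0.5
```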
As noted in other comments, this was a case where the pedagogical goals of the system outweighed maximizing performance.
It's really incomplete, and I have had hardly any time to work on it over the many years that it's been around, but the general idea is to type construction to determine the aspects that a scene actually uses and let the compiler sort out what to do about it.
For example, the depth count attached to a ray doesn't appear until it hits a reflective surface so there is no overhead if the scene doesn't need it. As you can probably imagine the template errors can be obscene :)
The next thing would be to constexpr everything so that the scene could be built up in the compiler, ideally in parts that are then combined by the linker. I think it'll be really fun to try that with this Moana scene, but I don't expect the compilers or linkers will survive, let alone the tracer.
EDIT to say, I really love your book. I feel hugely inspired every time I look at it, which unfortunately means I can't look too often as I don't have the time to play with these things as much as I'd like.
Did Disney release these stats for their Hyperion renderer? I would be curious where they spend all their time.