Porting a Ray Tracer to Rust, Part 1 (willusher.io)
73 points by adamnemecek on Dec 31, 2014 | 39 comments



I also did a similar thing with SmallVCM, another "educational" ray-tracer: https://github.com/yuriks/SmallVCM-rs

There is also a written report with my experience and benchmark results, but unfortunately it's in Portuguese: https://github.com/yuriks/tg-yuriks-2014b/blob/master/final/...

One thing that surprised me a lot is that Rust ended up being noticeably faster than C++ (around 20%) even when compared with clang.


Do you mean compared to C++ compiled with LLVM (as opposed to GCC)?

I've found that LLVM is generally faster than GCC and in some cases significantly so.


Yes, I compared Rust and Clang using similar LLVM versions and the Rust version of the code consistently beat Clang.


Not the same version then? :)


For the moment Rust is using a custom fork of LLVM while waiting for its patches to be upstreamed.


Hasn't this been the case like forever (at least for the past two years)?


They keep coming up with new patches to upstream. :)


Some thoughts about composition and inheritance.

What do you think about handling the missing `instance` member from DifferentialGeometry by splitting it into two types, PreDifferentialGeometry without this member, and DifferentialGeometry which has two members: the Pre- struct and the instance member?

This way we move the missing `instance` information from the value space to the type space.

If there are significant methods called on both the Pre- and full version of the struct, we can even impl Deref on DifferentialGeometry so that it calls methods on PreDifferentialGeometry automatically.
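
Roughly what I have in mind, as a sketch (the field names and the Instance stand-in are made up, not the post's actual definitions):

    use std::ops::Deref;

    struct Instance; // stand-in for the tracer's instance type

    // Hit information the geometry itself can produce.
    struct PreDifferentialGeometry {
        point: [f32; 3],
        normal: [f32; 3],
    }

    // The full struct: the Pre- part plus an instance that the type
    // now guarantees is present, no Option needed.
    struct DifferentialGeometry<'a> {
        pre: PreDifferentialGeometry,
        instance: &'a Instance,
    }

    impl<'a> Deref for DifferentialGeometry<'a> {
        type Target = PreDifferentialGeometry;
        fn deref(&self) -> &PreDifferentialGeometry { &self.pre }
    }

With the Deref impl, anything that only needs the geometric data can take a &PreDifferentialGeometry and accept either type.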


Author here, I'm not quite sure I follow. Would this then have the Sphere's and Instance's intersect implementations differ, one returning PreDifferentialGeometry and one returning DifferentialGeometry? I thought about having the instance implementations return different types, but later, when we want to put both our geometry instances and the geometry itself (triangles in a mesh) into bounding volume hierarchies, I think we'd run into trouble.


Is it all that important to return the instance in the DifferentialGeometry? It seems more straightforward to just apply the appropriate inverse transform to the hit point and normal vector on the way back out.

In my ray tracer (Glome), I treat an instance as just a container for some other object along with a transformation matrix, which itself behaves like a regular object. Every kind of object has a method to construct a bounding box, so when building an acceleration structure that contains instances, if I remember correctly, I take the 8 corners of the bounding box of the contained object, transform them, and construct a new bounding box around the transformed points. It's not optimal, but it works okay.
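
Sketched out, the corner-transform trick looks something like this (BBox is a minimal stand-in, and the point transform is passed in as a closure so it works with any matrix type):

    struct BBox { min: [f32; 3], max: [f32; 3] }

    impl BBox {
        fn empty() -> BBox {
            BBox { min: [f32::INFINITY; 3], max: [f32::NEG_INFINITY; 3] }
        }
        fn extend(&mut self, p: [f32; 3]) {
            for i in 0..3 {
                self.min[i] = self.min[i].min(p[i]);
                self.max[i] = self.max[i].max(p[i]);
            }
        }
    }

    // Conservative world-space box for an instanced object: transform the
    // 8 object-space corners and re-fit a box around the results.
    fn transform_bbox<F: Fn([f32; 3]) -> [f32; 3]>(transform: F, b: &BBox) -> BBox {
        let mut out = BBox::empty();
        for i in 0..8 {
            let corner = [
                if i & 1 == 0 { b.min[0] } else { b.max[0] },
                if i & 2 == 0 { b.min[1] } else { b.max[1] },
                if i & 4 == 0 { b.min[2] } else { b.max[2] },
            ];
            out.extend(transform(corner));
        }
        out
    }

The result is conservative rather than tight (it's the transformed box of a box, which circumscribes the object's true world-space box), which is why it's not optimal but works okay.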

Another way of working around your problem is to just pass the current instance (or a list of instances) as an argument to your ray intersection function. (I use a similar trick for textures, and also for a tagging scheme where I can apply arbitrary tags to objects and then pass back a list of all the tags of objects that the ray hit when I return a ray intersection.)

https://github.com/jimsnow/glome


It sounds like our instance types will end up being a bit different. Right now we don't need to send the hit instance back with the differential geometry, since it doesn't tell us anything extra, but later we'll attach different materials and lights to instances of the same geometry and will then need access to the hit instance so we can compute the material properties.

Sending the instance to the geometry to fill out the struct is a good idea. If Rust had default parameter support I'd do this, and have the instance's intersect just default to None and pass itself to the geometry's intersect. Then the functions would have the same signature, and we could mark the case of a None instance in geometry as unreachable.
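
Lacking default parameters, the explicit-Option version at least keeps one signature across both impls; a sketch, with stand-in types for the post's:

    struct Ray;
    struct Instance;
    struct DifferentialGeometry; // as in the post, with an instance member

    trait Geometry {
        // An Instance passes Some(self) down to its wrapped geometry;
        // a bare geometry being intersected directly gets None, which
        // geometry impls could then mark as unreachable.
        fn intersect(&self, ray: &mut Ray, instance: Option<&Instance>)
            -> Option<DifferentialGeometry>;
    }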


Would GeometryInstances go directly into an acceleration structure? Surely objects which point to the GeometryInstance and each have a world-space transform are what go in the scene-level acceleration structure...

Generally it's advisable to templatize acceleration structures so they can natively cope with triangles, objects and possibly other primitives (spheres and curves) separately.


Right, the instances go into a scene-level acceleration structure, and for each triangle mesh we build an object-level acceleration structure over the triangles. I was planning to do something like a BVH<T: Geometry> so that I don't need separate BVH types for instances and triangles, since Instance implements the Geometry trait. I'm not sure if there's a better alternative that would help get around the DifferentialGeometry issue, since in this case the intersect implementations of geometry types (Spheres, Triangles) and of Instances must have the same signature.
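
As a sketch, the generic version might look like this (the node layout and the trait shape are assumptions, not settled code):

    trait Geometry { /* fn intersect(...) -> Option<DifferentialGeometry> */ }

    struct BVHNode; // bounds plus child/leaf indices, elided here

    // One generic BVH serves both levels: BVH<Instance> for the scene,
    // BVH<Triangle> per mesh, as long as both implement Geometry.
    struct BVH<T: Geometry> {
        nodes: Vec<BVHNode>, // flattened tree
        prims: Vec<T>,
    }

The trait bound is exactly the constraint forcing the matching intersect signatures.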


So one important question about Rust that's relevant here is how it handles dynamic stack allocation. Is it in the language? C++ has alloca, of course, but needs a couple of macros to work across compilers. Memory allocation ends up being a very big performance concern in renderers; PBRT uses both memory arenas and an ALLOCA macro.


It's quite possible to avoid alloca() and just use C++'s placement new operator to re-use memory already allocated on the stack.

I'm not sure if Rust has something similar to placement new, but I'm sure custom allocators (for memory arenas) are possible in Rust.
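
For arenas specifically, even without allocator hooks you can get most of the benefit from a plain bump allocator over a preallocated buffer. A minimal sketch (alignment handling omitted):

    // Grab one chunk up front, hand out slices, and reset once per
    // ray/tile instead of freeing individual allocations.
    struct Arena {
        buf: Vec<u8>,
        offset: usize,
    }

    impl Arena {
        fn new(capacity: usize) -> Arena {
            Arena { buf: vec![0u8; capacity], offset: 0 }
        }
        // Panics if the arena is exhausted; a real one would grow or chain.
        fn alloc(&mut self, size: usize) -> &mut [u8] {
            let start = self.offset;
            self.offset += size;
            &mut self.buf[start..self.offset]
        }
        // O(1) "free everything" at the end of the tile.
        fn reset(&mut self) { self.offset = 0; }
    }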


User-defined placement new and custom allocators for existing data structures are still being worked on. `box` can do a form of placement new already, and while you can use your own allocator, you can't use it with already-existing types.


That is a valid and useful technique, but being able to dynamically allocate stack memory in the first place is separate from using placement new on stack memory.


Of course, I was just pointing out it's not always necessary to use alloca - I've written a production-level path-tracer without needing to allocate stack memory dynamically.


I might be mistaken about this, but I believe they plan on implementing support for this at some point.


It's on the todo list.


Why reimplement linear algebra? Rust has got that part very well covered, for example using nalgebra: https://github.com/sebcrozet/nalgebra
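
For example, reflecting a direction about a normal (nalgebra's API has shifted between versions; current releases spell it Vector3, older ones na::Vec3):

    extern crate nalgebra as na;
    use na::Vector3;

    fn main() {
        let n = Vector3::new(0.0f32, 1.0, 0.0);              // surface normal
        let d = Vector3::new(1.0f32, -1.0, 0.0).normalize(); // incoming direction
        let r = d - n * (2.0 * d.dot(&n));                   // mirror reflection
        println!("{}", r);
    }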


What is physically based rendering? What do the terms physically and based mean here?


In addition to other answers, physically based rendering pragmatically means that renders have the potential to be a correct simulation of photography up to a certain point. For many many years rendering for film didn't even use ray tracing at all. Lights were a single point in space with no area and shadows were done with shadow maps. Now most film rendering has shifted to using area lights, ray traced visibility to different points on the light, and shaders that (mostly) preserve energy (they can't reflect more light than what is incoming). Even game engines like Unreal 4 take these into consideration and make very crafty and pragmatic tradeoffs to maintain speed and the principles that give an increase in realism.
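
Formally, what these renderers approximate is the rendering equation (Kajiya 1986): outgoing radiance is emitted radiance plus incoming radiance weighted by the surface's BRDF f_r, integrated over the hemisphere:

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i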


Fun fact: in most physically based ray tracers, the photons are shot from the camera, bounce off various objects, and finally hit a light (or a sky box), whose energy is then propagated back along the chain of bounces. In physics everything is reversed: the photons are generated by a light and bounce off various objects (which change their energy by absorbing some of it) before ending up in a camera sensor. In fact, some advanced ray tracers follow this second model, but they are much, much harder to build and, in general, require more computational power.

If you are looking for a good (you can build your own ray tracer) but not too hostile intro to physics based rendering, I recommend "Advanced Global Illumination" by Dutre and others (http://www.amazon.com/Advanced-Global-Illumination-Second-Ed...). It is actually quite a fascinating computer science application, and you can produce beautiful images with it.


Erm... Photon mapping uses "forward" path tracing to trace photons from the lights, not the camera.

In conventional (uni-directional) path tracing, rays are shot from the camera only, and bounce around the scene (other than for light visibility testing).

Bi-directional path tracing uses rays originating at both the camera and light sources going in opposite directions.

VCM is a combination of Bi-directional path tracing and photon mapping.


I think he was talking about rays when he said 'photons', not about photon mapping as a specific technique.


It basically means you try to model your rendering engine on the real world as closely as possible. That in turn implies that every aspect of your rendering engine is analogous to how light and real-world objects actually behave.


Thanks!



So a more grammatically correct term would be "physics based rendering"?



Yes, I have seen that. This is why I asked what those words meant. The words "physically" and "based" as they are used here do not fit my understanding of English grammar. I am not a native English speaker; I just want to know why this sequence of words is meaningful.


I'm a native English speaker, but have no experience with rendering or this topic.

To my ears, "Physically Based Rendering" does sound a little stilted. I don't immediately have an intuitive grasp of what it means based on words alone.

It doesn't sound like it means "physics based rendering", which would mean "a rendering process that uses physics at its core."

Rather, the attachment I have to the word "physically" is more along the lines of "metaphorical vs. physical" (i.e. "concrete", "real", "observable"). So along those lines, my natural understanding of "Physically based" rendering would be a process not based on abstract mathematical principles, but one that tries to take the physical properties of objects into account. So, maybe to render a rock you'd think about what it's made of, if moss grows on it, etc.

(I don't know if that's actually what PBR means, just that's what it sounds like to me as a native speaker.)


"Physically based" rolls off the tongue more easily, and the adverb "physically" is modifying the verb "based", and then "physically-based" or "physically based" combines into an adjective or sometimes an adverb.


One of the main concepts in physically based rendering, more than whether it uses path tracing or ray tracing or global illumination, is the notion of correctly modeled energy conservation with respect to the rendering of surfaces.

Have a look at this link: https://support.solidangle.com/display/AFMUG/Standard
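
Concretely, energy conservation is a constraint on the BRDF: for every outgoing direction, the reflected fraction of incoming energy can't exceed one:

    \int_{\Omega} f_r(x, \omega_i, \omega_o) \, (n \cdot \omega_i) \, d\omega_i \le 1
        \quad \text{for all } \omega_o

A shader that violates this adds energy at each bounce, which shows up under global illumination as surfaces glowing brighter than their illumination.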


Rather ironically, Arnold's standard shader still has bugs, meaning its Cook-Torrance microfacet model isn't completely energy-conserving at glancing angles...


Interesting. It's been around long enough you'd think they would have worked out those details. I just linked it because I felt it gave the clearest explanation of the concept.


"Physically Based" has somewhat turned into the CG/gamedev version of "Big Data". I remember reading a paper about physically based facial animation or modelling. Which uses the concepts of facial tissue and muscles to realistically render faces. It went all downhill after that though. We'll probably soon see a re-release of asteroids called "Physically Based Asteroids" because there is a multiplication in there somewhere that reminded the author of Newtons second law.


In rendering specifically it isn't nearly as much of an ambiguous buzzword.




