Show HN: A raytracer to shade topographic maps in R (tylermw.com)
198 points by tylermw on May 14, 2018 | 47 comments



That's a cool project and a nice write up. I remember as a 13 yr old being fascinated by 3D software. I think I just wanted to create something photo-realistic with a computer. I couldn't afford 3DS Max, so I started on the steep learning curve with Blender. Around the same time, they integrated Yafray into Blender, resulting in some outstanding renders, similar in quality to commercial packages. I was amazed and started delving into the maths - what a fun and confusing journey it was! Five years later, when the maths was finally explained to me at university, I truly appreciated the work that went into things like Blender and Yafray. So kudos to you for your raytracer!


I can really recommend Matt Pharr’s “Physically Based Rendering” if you'd like to get lost in this kind of thing for a few hours.


Thanks, will give that a look when I next find some exploration time :)


Aside: At various points I’d gotten deep/prolific in 3DS Max, Maya, Softimage, Cinema 4D, and Lightwave — but to this day, I cannot grok Blender. I never get past a day of using it without feeling like the controls are completely against my brain.


I felt the same about Blender for years, until I found this tutorial series: https://m.youtube.com/playlist?list=PLjEaoINr3zgHs8uzT3yqe4i...

It teaches basic modeling and texturing, but in a way where you start picking up the shortcuts and the interface slowly over the course of the series. There's not much that's intuitive about the UI, but it finally makes sense to me.


I know what you mean. I think even their reworked UI was problematic (at times it was even difficult to visually discern what the heck was going on). But eventually it does sort of make sense - kind of the way vi eventually sits well with some but is hated by others.


It is amazingly terrible. 3D software competes heavily on interface. Both Softimage and Maya have great interfaces for most things. Houdini is not lacking for interface refinement and well thought out design either.

Blender is another story. Its interface is a hazard, I can't even understand why it is SO terrible. Even saving files becomes a rabbit hole of nonsense due to how buttons are positioned and assumptions you make about what would be sane. It almost seems like an elaborate practical joke.


I've used Blender for about 14-odd years. I stream it from time to time; if you're interested in seeing how it looks with that much casual time behind it, or you want a lesson or two, send me a PM and I'll demo it for you. :)


I’m curious - was it your first 3D software? I’m wondering if it’s a difference of learning from the ground up on it vs coming from other packages.


It actually was not. My first 3D software was Milkshape 3D and QuArK, for Quake 2 models and map making, respectively.

For money-burning reasons I had to use 3ds Max when I was making games for the army. It had been years since I'd touched it, but after a few days it was fine enough for my purposes. Still wish they'd have let me use Blender, though.


When did you last try? I'm a small-time Blender dev, and lots of work is being done to improve the usability. Any comments you have would be appreciated.


I think the way split screening works is quite confusing. As a frequently returning user, I often forget how to merge a split back and end up having to Google it. Or, without realising it, I end up splitting the screen too many times. I wish that part was more intuitive. The fact that it isn't means I am reluctant to use this rather powerful feature.


Funny - I was about to hop in and defend Blender, before I read your comment. Splitting the screen is annoyingly fiddly.

In general, I actually like Blender's interface. I'd like it more if it was a little more vimlike (all the screen splitting stuff could be under 'Ctrl-W', for instance).

My problem with Blender is more about how easy it is to lose data. A reliable autosave would be amazing.


I tried again a couple of months ago. The only way I can describe it is that I have a muscle memory with 3D software that largely transfers between most packages. With Blender I feel frozen in my tracks.


The 2.8 update looks promising, though I don't actually know if it will be better.


Thanks for the comment! It's pretty incredible what you can do with raytracing nowadays, using only open source software.


I think the O(row * col * light_count * dist) algorithm could be optimized to run in O(row * col * light_count).

The idea is, imagine the light is coming from the left, and our height map consists of a single row that looks like (0,5,2,7,1).

By using the angle of the light, we can compute from left to right whether the current point is visible. We only need to store the last point that was visible, since that is the only point that can block us from this light source. For the given example, at the point of height 2 we only need to care about the previously visible point of height 5. If 2 isn't visible, 5 remains the "highest" point; if 2 is visible, 2 becomes the "highest" point. There is no danger that 5 will block something beyond 2, since 2 wasn't blocked. So at each point we only have to check against one previous point, not all of them.

If the light isn't coming from exactly the left side, then we can transform the image in such a way that the light now comes from the left. This transformation takes O(r * c) time per light.

Total complexity: O(transforms + visibilityChecks + mergingResults) = O(l * r * c)
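
Here's a quick sketch of that sweep in R (my own illustration of the idea above, not code from the article), written with an equivalent running "shadow line" instead of explicitly storing the last visible point:

    sweep_shadow_row <- function(heights, sun_angle) {
      drop <- tan(sun_angle)              # shadow line falls this much per cell
      lit <- logical(length(heights))
      shadow_height <- -Inf
      for (x in seq_along(heights)) {
        shadow_height <- shadow_height - drop
        lit[x] <- heights[x] >= shadow_height
        shadow_height <- max(shadow_height, heights[x])  # lit cells raise the line
      }
      lit
    }

    # The example row above, with the sun 30 degrees above the horizon:
    sweep_shadow_row(c(0, 5, 2, 7, 1), 30 * pi / 180)
    #> TRUE TRUE FALSE TRUE FALSE

Repeating this per row, after rotating the map so the light comes from the left, gives the O(l * r * c) total.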


One thing I didn't mention in the article is a few optimizations I made that make it significantly less than O(row * col * light_count * dist). First, if the ray exits the map region (either by going higher than the highest point on the map, or outside the boundaries), it stops propagating that ray and calls it a non-intercept (cutting down on the "dist" complexity). Second, if you're drawing multiple rays for a finite-sized light source, I first order them by angle and send the rays out from the lowest to the highest angle. If a ray at a lower angle doesn't intercept a surface, higher angles won't either, so you save a lot on the "light_count" complexity.
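
For what it's worth, here's a rough sketch of that second shortcut in R (mine, not the package's actual code; `ray_blocked` is a placeholder for the single-ray test described above):

    fraction_lit <- function(elevations, ray_blocked) {
      # `ray_blocked(angle)` stands in for the per-ray test: TRUE if a ray at that
      # elevation angle hits terrain before leaving the map region or rising
      # above the map's highest point.
      elevations <- sort(elevations)      # lowest angle first
      for (k in seq_along(elevations)) {
        if (!ray_blocked(elevations[k])) {
          # this ray escaped, so every steeper ray escapes too
          return((length(elevations) - k + 1) / length(elevations))
        }
      }
      0                                   # every ray was blocked
    }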


Your raytraced shadow maps have a bit of a "glossy" feel, which suggests that you don't use a linear colorspace for rendering them. I think ImageMagick also needs additional options to handle multiplication in a linear colorspace. I didn't look into any of your code, so I'm sorry if I'm wrong about this.


I don't know about "glossy" (this is not my field), but I did feel that the way the shadows diffused (moving from the object casting the shadow to the shadow's terminus) seemed ... unnatural?

There seem to be odd contours, banding, striations. Perhaps it is the gamma-thing that you point out — the blackness falling off too quickly. Or perhaps the algorithm has been overly optimized and the math is taking unnatural shortcuts.


I'll look into that, thanks for the note. My ImageMagick knowledge is by far my weakest link.


Something like "-set colorspace RGB -colorspace sRGB" converts from linear RGB to sRGB. Switch those two to go the other way as needed.
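
If you're calling ImageMagick from R, the same options can be passed through system(); the file names here are placeholders:

    system(paste(
      "convert shadow_linear.png",
      "-set colorspace RGB -colorspace sRGB",  # linear RGB -> sRGB
      "shadow_srgb.png"
    ))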


If you missed it, the github repo for the R package is located here:

https://github.com/tylermorganwall/rayshader


I have a bit of a complaint. I've used something similar to the "derivative-based model" (DBM) for 3D shading before, and tested it on moonscapes.

The angle-to-brightness curve can be adjusted as needed to avoid under- or over-emphasizing mountain slopes. The "Andes" example is probably a case of exaggerating topography to illustrate relative differences, and NOT an inherent fault of slope-to-brightness maps (DBM-like). I can't say which one is more practical without going to the actual slopes to check the look and feel of the terrain and "feel" of the slopes. For technical accuracy, color coding is probably better (color-per-height), but slope-shaded maps are easier for regular folks to relate to.

Ray tracing is probably better when dealing with direct shadows and reflections, such as found with buildings. But for high level maps with mountains and rivers, I believe DBM is just fine if tuned right, and won't differ noticeably from comparable ray tracing.

To avoid misinterpretations, the large-scale shading should more or less mirror a semi-cloudy day such that direct shadows and reflections would play an extremely minor role anyhow.

I should point out there is an element of subjectivity here. Illustrations often exaggerate to emphasize various attributes of the illustration's target. Often there is a trade-off in terms of aiding the quick mental digestion of information versus "native" accuracy. Abstractions lie on purpose, or we wouldn't call them "abstractions".
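
To illustrate the adjustable angle-to-brightness curve mentioned above, a DBM-style mapping in R might look something like this (the knobs and their defaults are made up, just to show where the tuning happens):

    slope_to_brightness <- function(slope, min_slope = 0.01, exaggeration = 1) {
      # ignore slopes below a threshold so flat areas (e.g. water) shade uniformly,
      # then map the slope angle to brightness with an adjustable gain
      s <- pmax(slope - min_slope, 0)
      cos(atan(exaggeration * s))
    }

Raising `exaggeration` emphasizes slopes; `min_slope` keeps flat areas like water at a uniform brightness.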


It looks to me like the original gradient based shading code is just wrong. Maybe there are numerical problems due to its frivolous use of trig functions, or more likely the way it calculates gradients is just broken. But shading based on slope should obviously make all zero slope areas (like water) the same color, and the fact that it doesn't means it isn't working.

The raycasting code in this article, on the other hand, seems to be missing the Lambert term! Just because a surface is not in shadow doesn't mean that it will appear at uniform brightness. If it is diffuse/matte, which seems like a reasonable starting point for most terrain, each light ray L should have a contribution to the brightness of the point proportional to L dot N, where N is the normal vector of the surface (and to find that you need, sorry, the gradient). In short, light almost tangent to the surface doesn't illuminate it much. Other kinds of surfaces will have different BRDFs (e.g. they can be shiny) but none of them will have the completely unshaded behavior shown. Of course, it's fine to do totally nonphysical things if it helps emphasize some aspect of the data, but it doesn't seem to be done knowingly and I think it produces a misleadingly flat look.
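
As a sketch of what adding that term could look like (my own code, estimating N from the height map with central differences; not the article's):

    lambert_shade <- function(heightmap, light_dir) {
      # light_dir is a length-3 vector pointing toward the light
      light_dir <- light_dir / sqrt(sum(light_dir^2))
      nr <- nrow(heightmap); nc <- ncol(heightmap)
      shade <- matrix(0, nr, nc)           # border cells are left unshaded
      for (i in 2:(nr - 1)) {
        for (j in 2:(nc - 1)) {
          # gradient via central differences (cell spacing = 1)
          dzdx <- (heightmap[i + 1, j] - heightmap[i - 1, j]) / 2
          dzdy <- (heightmap[i, j + 1] - heightmap[i, j - 1]) / 2
          n <- c(-dzdx, -dzdy, 1)
          n <- n / sqrt(sum(n^2))                      # unit surface normal
          shade[i, j] <- max(0, sum(light_dir * n))    # L dot N, clamped at 0
        }
      }
      shade
    }

Multiplying this by the ray-cast shadow map would combine the two effects.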


His dataset includes underwater topography


I do address all of your concerns in the article.

For the angle-to-brightness curve, one of my footnotes says:

"I believe his code only lacks a step that implements a minimum threshold on the slope before it gets colored--a step that would remove the problems seen in Figure 1 in flat areas."

With regards to misinterpretations:

"There's a tradeoff between realism and actual topographic information--real shadows can tell you about the relative height of two objects, but nothing about the slope of the land in the shadow itself."

(I also mention how you can adapt the rayshading technique for a local model that doesn't have that problem)

And later on in that paragraph, about subjectivity:

"Thirdly, cartography has an artistic and practical component, and it’s true what reliefshading.com said in the non-computational shade it threw at global models–sometimes, you need to tweak the direction of the light hitting a particular mountain or valley because it looks better."


I still have to conclude that the Andes example as given is misleading. It strongly implies that ray-tracing "fixes" the problems shown. A footnote alone can't fix that (or is the wrong way to fix it). This clause under the picture is particularly problematic: "but it's disconnected from the underlying physics of what makes real shadows occur: rays of light not hitting a surface due to another surface in the way." -- As I mentioned, you often DON'T want that effect for the type of map shown, and that alone is not the main cause of the differences in the illustration. I'd bet money I could tweak the brightness-mapping curve to make the left look very similar to the right. (As somebody else mentioned, the actual curve depends on weather and local textures etc., such that both techniques are ultimately rough approximations.)


I have the same complaint. I couldn't think of a polite way to put it, so this won't be polite, but whenever someone compares their algorithm with a poor version of a competing technique, it makes me question their integrity and results.


This would be fantastic as a QGIS plugin, have you thought about applying to their grant program?


First time I've heard of QGIS, so nope! But it looks like they support C++ plug-ins, so it wouldn't be hard to port it over. Don't know anything about their approval process, though.


Here's a brief outline of the requirements: https://plugins.qgis.org/ They advertise a maximum review time of two weeks.


What’s really baffling about all things rendering is when you realize that almost anything you can think of and optimize to render in 10ms with a ton of sweat was already done in 1ms in a game engine. Ten years ago.


Except that's not true is it?

3D rendering in games, even today, doesn't (in general) raytrace. While some things are raytraced in games /today/, performance is still achieved largely by leveraging the GPU, which is nowhere near as good at raytracing as at general rasterizing. Essentially, GPUs make raytracing faster simply because it is ridiculously parallel, and by limiting the kinds of features that are supported (vs. general CPU raytracing).

And also "cheating" -- raytracing is ideally "correct", but for a game you don't need to be correct, you just need to be good enough (or rather, you need to be pretty, which isn't necessarily physically correct).

That said, based on the content of the article, this could be trivially improved with better data structures -- it's just walking a height map, but the article makes no mention of quadtrees or k-d trees, which are IIRC the standard go-to for height-map traversal.


My comment came from the observation that tracing samples across a depth buffer is similar to what (current, rasterizing) game engines do for ambient occlusion.

https://en.m.wikipedia.org/wiki/Screen_space_ambient_occlusi...

It’s correct that it cheats (takes a few samples and blurs the results) instead of sampling adequately to get "correct" results.


SSAO is not remotely close to raytracing (as done here). This post is not discussing ambient occlusion, but actual directional shadows.

SSAO doesn't sample a path across the depth buffer (because that wouldn't make any sense at all for shading), or consider light direction. This is very much a recollection from many, many years ago, but my understanding is that it's just a random sample of depth information surrounding a given fragment, using the depth of those samples relative to the current fragment as a scaling factor for the ambient contribution to the final fragment lighting. The end result isn't even remotely close to correct, but again, correct isn't the same as looking good enough (or even just looking good: in games you don't have total control over how a scene may be seen by a player, and incorrect lighting can often be far superior to correct lighting from a gameplay POV).


This doesn't need to be done with actual ray triangle intersections.

There are many techniques relating to relief mapping that deal with ray marching through a depth map. Here is a pdf:

http://developer.download.nvidia.com/books/gpu_gems_3/sample...

This technique would actually make it run very fast on a GPU, likely with better quality as well.


It's completely true.

The term to google is "Parallax Occlusion Mapping," which was used for several extremely impressive (for the time) GPU demos in 2004.

EDIT: https://developer.amd.com/wordpress/media/2012/10/I3D2006-Ta...


Neat project!

I'm not sure if the author is familiar with ambient occlusion, but the method described here can be extended to generate ambient occlusion data, which helps with a couple of flaws. In particular:

- Although the shading is very nice (see Figure 8), the long shadow rays are distracting and are not very useful for cartography.

Ambient occlusion extends the idea presented here: instead of sending rays towards a relatively small light source, consider the light source to be the entire hemisphere of the sky, like on an overcast day. The result is wonderfully soft light that conveys form much like Figure 8, but without shadows.

Simply send out more rays to the entire sky hemisphere to create this effect.
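
A rough sketch of that extension (mine; `ray_reaches_sky` stands in for the single-ray test the article already describes):

    ambient_occlusion <- function(heightmap, n_azimuth = 16, n_elevation = 8,
                                  ray_reaches_sky) {
      azimuths   <- seq(0, 2 * pi, length.out = n_azimuth + 1)[-1]
      elevations <- seq(pi / (2 * n_elevation), pi / 2, length.out = n_elevation)
      ao <- matrix(0, nrow(heightmap), ncol(heightmap))
      for (i in seq_len(nrow(heightmap))) {
        for (j in seq_len(ncol(heightmap))) {
          # brightness = fraction of sky-hemisphere rays that escape the terrain
          open <- 0
          for (az in azimuths) for (el in elevations) {
            if (ray_reaches_sky(heightmap, i, j, az, el)) open <- open + 1
          }
          ao[i, j] <- open / (n_azimuth * n_elevation)
        }
      }
      ao
    }

Cosine-weighting the elevation angles would be a bit more physically faithful, but even a uniform sample gives the soft, overcast look described.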


I think it is likely possible to do this in screenspace ultra fast on demand using a shadow map / SSAO technique.


In this application you're probably more interested in accuracy than in speed. But it could be worth trying, to see what interactive re-mapping would buy you.


I made something similar that runs in real time in 30 lines of code.


Awesome stuff! How'd you make the illustrations (like Figure 6)?


Thanks! I drew the frames in Inkscape and animated them together using the ScrollMagic JavaScript library.


I do not recommend it. It gets irritating when you want to inspect one version of the image but it moves up and down instead of staying static.

My suggestion: a simple GIF would work fine. Or if you want to get fancy, you could do interactive demos like on this blog: https://www.redblobgames.com/articles/visibility/


Looks good. Would love to see more!


Check out the last few posts on my blog! I use this method a few times to help demonstrate points:

http://www.tylermw.com/visualizing-a-reddit-hug-of-death-and...

http://www.tylermw.com/soma-water-filters-are-worthless-how-...



