
Show HN: A raytracer to shade topographic maps in R - tylermw
http://tylermw.com/throwing-shade/
======
osrec
That's a cool project and a nice write up. I remember as a 13 yr old being
fascinated by 3D software. I think I just wanted to create something
photo-realistic with a computer. I couldn't afford 3DSMax, so I started on the steep
learning curve with Blender. About the same time, they integrated Yafray into
Blender, resulting in some outstanding renders, similar in quality to
commercial packages. I was amazed and started delving into the maths - what a
fun and confusing journey it was! Five years later when the maths was finally
explained to me at university, I truly appreciated the work that went into
things like Blender and Yafray. So kudos to you for your raytracer!

~~~
gdubs
Aside: At various points I’d gotten deep/prolific in 3DS Max, Maya, Softimage,
Cinema 4D, and Lightwave — but to this day, I cannot grok blender. I never get
past using it for a day and feeling like the controls are completely against
my brain.

~~~
anchpop
When did you last try? I'm a small-time blender dev and lots of work is being
done to improve the usability. Any comments you have would be appreciated

~~~
osrec
I think the way split screening works is quite confusing. As a frequently
returning user, I often forget how to merge a split back and end up having to
Google it. Or without realising it, I end up splitting the screen too many
times. I wish that part were more intuitive. The fact that it isn't means I
am reluctant to use this rather powerful feature.

~~~
pasabagi
Funny - I was about to hop in and defend Blender, before I read your comment.
Splitting the screen is annoyingly fiddly.

In general, I actually like Blender's interface. I'd like it more if it was a
little more vimlike (all the screen splitting stuff could be under 'Ctrl-W',
for instance).

My problem with Blender is more about how easy it is to lose data. A reliable
autosave would be amazing.

------
abirler
I think the O(row * col * light_count * dist) algorithm could be optimized to
run in O(row * col * light_count).

The idea is: imagine the light is coming from the left, and our height map
consists of a single row that looks like (0,5,2,7,1).

By using the angle of the light, we can compute from left to right whether the
current point should be visible. We only need to store the last point that was
also visible since that is the only point that can block us from this light
source. For the given example, at the point of height 2 we only need to care
about the previously visible point of height 5. If 2 isn't visible, 5 remains
the "highest" point; if 2 is visible, 2 becomes the "highest" point, and there
is no danger that 5 will block anything beyond 2, since 2 itself wasn't
blocked. So at each point we only have to check against one previous point,
not all of them.

If the light isn't coming from exactly the left side, then we can transform
the image in such a way that the light is now coming from the left. This
transformation takes O(r * c) time per light.

Total complexity: O(transforms + visibilityChecks + mergingResults) = O(l * r
* c)
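
For what it's worth, here's a minimal Python sketch of the single-row scan
(the names and the unit cell size are my own assumptions, not anyone's actual
code):

```python
import math

def lit_mask(heights, sun_elevation_deg):
    """Which cells of a single height-map row are sunlit?

    Light comes from the left at the given elevation angle. Cell i is
    shadowed iff some earlier cell j satisfies
        h[j] - tan(theta) * (i - j) > h[i],
    i.e. h[j] + tan(theta)*j > h[i] + tan(theta)*i, so tracking the
    running maximum of that key gives the O(n) scan described above.
    """
    s = math.tan(math.radians(sun_elevation_deg))
    best = -math.inf  # max of h[j] + s*j over cells seen so far
    lit = []
    for i, h in enumerate(heights):
        key = h + s * i
        lit.append(key >= best)  # grazing rays count as lit
        best = max(best, key)
    return lit

# The (0,5,2,7,1) row from the comment, with a 45-degree sun:
print(lit_mask([0, 5, 2, 7, 1], 45))  # [True, True, False, True, False]
```

The point of height 5 shadows the 2, and the 7 shadows the 1, matching the
walkthrough above.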

~~~
tylermw
One thing I didn't mention in the article is a few optimizations I made that
bring the cost significantly below O(row * col * light_count * dist). First, if
the ray exits the map region (either by going higher than the highest point on
the map, or outside the boundaries), it stops propagating that ray and calls
it a non-intercept (cutting down on the "dist" complexity). Second, if you're
drawing multiple rays for a finite sized light source, I first order them by
angle and send the rays out from the lowest to the highest angle. If a ray at
a lower angle doesn't intercept a surface, higher angles won't either. So you
save a lot on the "light_count" complexity.
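
If I'm reading the second optimization right, a toy single-row version of it
might look like this (my own names, not the actual package code):

```python
import math

def fraction_shadowed(heights, i, angles_deg):
    """Fraction of a finite light source blocked at cell i of a single
    height-map row, with light coming from the left. Rays are tried from
    the lowest elevation angle upward; once one ray clears the terrain,
    every higher-angle ray must clear it too, so we can stop early."""
    hits = 0
    for a in sorted(angles_deg):
        s = math.tan(math.radians(a))
        blocked = any(heights[j] - s * (i - j) > heights[i] for j in range(i))
        if not blocked:
            break  # all remaining (higher) angles are unblocked as well
        hits += 1
    return hits / len(angles_deg)
```

The `break` is where the "light_count" savings comes from: on open terrain
most cells exit after the very first ray.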

------
leni536
Your raytraced shadow maps have a bit of a "glossy" feel, which suggests that
you don't render in a linear colorspace. I think ImageMagick also needs
additional options to do the multiplication in a linear colorspace. I didn't
look into any of your code, so I'm sorry if I'm wrong about this.
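
For anyone curious why this matters, a quick sketch of the difference (the
transfer functions are the standard sRGB ones; the 0.5 pixel values are made
up):

```python
def srgb_to_linear(c):
    """sRGB -> linear light, c in [0, 1] (standard sRGB EOTF)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Linear light -> sRGB, c in [0, 1]."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A 50% shadow multiplied onto a mid-grey (0.5) pixel:
pixel, shadow = 0.5, 0.5
naive = pixel * shadow                                   # multiply sRGB values directly
linear = linear_to_srgb(srgb_to_linear(pixel) * shadow)  # multiply in linear light
# naive ~ 0.25 vs linear ~ 0.36: multiplying the encoded values darkens
# too aggressively, which is part of what gives compositing an off look.
```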

~~~
tylermw
I'll look into that, thanks for the note. My ImageMagick knowledge is by far
my weakest link.

~~~
a_e_k
Something like "-set colorspace RGB -colorspace sRGB" converts from linear RGB
to sRGB. Switch those two to go the other way as needed.

------
tylermw
If you missed it, the github repo for the R package is located here:

[https://github.com/tylermorganwall/rayshader](https://github.com/tylermorganwall/rayshader)

------
tabtab
I have a bit of a complaint. I've used something similar to the "derivative
based model" (DBM) for 3D shading before, and tested it on moonscapes.

The angle-to-brightness curve can be adjusted as needed to avoid under or
over-emphasizing mountain slopes. The "Andes" example is probably a case of
exaggerating topology to illustrate relative differences, and NOT an inherent
fault of slope-to-brightness maps (DBM-like). I can't say which one is more
practical without going to the actual slopes to check the look and feel of the
terrain and "feel" of the slopes. For technical accuracy, color coding is
probably better (color-per-height), but slope-shaded maps are easier for
regular folks to relate to.

Ray tracing is probably better when dealing with direct shadows and
reflections, such as found with buildings. But for high level maps with
mountains and rivers, I believe DBM is just fine if tuned right, and won't
differ noticeably from comparable ray tracing.

To avoid misinterpretations, the large-scale shading should more or less
mirror a semi-cloudy day such that direct shadows and reflections would play
an extremely minor role anyhow.

I should point out there is an element of subjectivity here. Illustrations
often exaggerate to emphasize various attributes of the illustration's target.
Often there is a trade-off in terms of aiding the quick mental digestion of
information versus "native" accuracy. Abstractions lie on purpose, or we
wouldn't call them "abstractions".

~~~
voidmain
It looks to me like the original gradient based shading code is just wrong.
Maybe there are numerical problems due to its frivolous use of trig functions,
or more likely the way it calculates gradients is just broken. But shading
based on slope should obviously make all zero slope areas (like water) the
same color, and the fact that it doesn't means it isn't working.

The raycasting code in this article, on the other hand, seems to be _missing_
the Lambert term! Just because a surface is not in shadow doesn't mean that it
will appear at uniform brightness. If it is diffuse/matte, which seems like a
reasonable starting point for most terrain, each light ray L should have a
contribution to the brightness of the point proportional to L dot N, where N
is the normal vector of the surface (and to find that you need, sorry, the
gradient). In short, light almost tangent to the surface doesn't illuminate it
much. Other kinds of surfaces will have different BRDFs (e.g. they can be
shiny) but none of them will have the completely unshaded behavior shown. Of
course, it's fine to do totally nonphysical things if it helps emphasize some
aspect of the data, but it doesn't seem to be done knowingly and I think it
produces a misleadingly flat look.
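
To make the Lambert term concrete, here's a toy 1-D version (my own names;
central-difference gradient, unit cell size, light_dir given as (x, up)):

```python
import math

def lambert_shade(heights, light_dir):
    """Lambertian brightness for each interior cell of a 1-D height
    profile. The surface normal comes from the central-difference
    gradient; brightness is max(0, L . N), so light nearly tangent to
    the surface contributes little even where nothing casts a shadow."""
    lx, ly = light_dir
    norm = math.hypot(lx, ly)
    lx, ly = lx / norm, ly / norm
    out = []
    for i in range(1, len(heights) - 1):
        slope = (heights[i + 1] - heights[i - 1]) / 2.0  # dh/dx
        # unnormalized surface normal for y = h(x) is (-slope, 1)
        nx, ny = -slope, 1.0
        n = math.hypot(nx, ny)
        out.append(max(0.0, (lx * nx + ly * ny) / n))
    return out
```

Flat ground lit from straight above gets brightness 1; the same ground lit
from the horizon gets 0, even though neither is in shadow.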

~~~
ffwacom
His dataset includes underwater topography.

------
JorgeGT
This would be fantastic as a QGIS plugin, have you thought about applying to
their grant program?

~~~
tylermw
First time I've heard of QGIS, so nope! But it looks like they support C++
plug-ins, so it wouldn't be hard to port it over. Don't know anything about
their approval process, though.

~~~
JorgeGT
Here's a brief outline of the requirements:
[https://plugins.qgis.org/](https://plugins.qgis.org/) They advertise a
maximum review time of 2 weeks.

------
alkonaut
What’s really baffling about all things rendering is when you realize that
almost anything you can think of and optimize to render in 10ms with a ton of
sweat, was already done in 1ms in a game engine. Ten years ago.

~~~
olliej
Except that's not true, is it?

3D rendering in games even today doesn't (in general) raytrace. While some
things are raytraced in games /today/, performance is still achieved largely
by leveraging the GPU, which is nowhere near as good at raytracing as at
general rasterizing. Essentially, GPUs make raytracing faster simply because
it is ridiculously parallel, and by limiting the kinds of features that are
supported (vs. general CPU raytracing).

And also by "cheating" -- raytracing is ideally "correct", but for a game you
don't need to be correct, you just need to be good enough (or rather, you need
to be pretty, which isn't necessarily physically correct).

That said, based on the content of the article, this could be trivially
improved with better data structures -- it's just walking a height map, but
the article makes no mention of quadtrees or k-d trees, which are IIRC the
standard go-to for height map traversal.

~~~
alkonaut
My comment came from the observation that sampled tracing of a depth buffer is
similar to what (current, rasterizing) game engines do for ambient occlusion.

[https://en.m.wikipedia.org/wiki/Screen_space_ambient_occlusi...](https://en.m.wikipedia.org/wiki/Screen_space_ambient_occlusion)

It’s correct that it cheats (takes a few samples and blurs the results)
instead of sampling densely enough to get "correct" results.

~~~
olliej
SSAO is not remotely close to raytracing (as done here). This post is not
discussing ambient occlusion, but actual directional shadows.

SSAO doesn't sample a path across the depth buffer (because that wouldn't make
any sense at all for shading), or consider light direction. This is very much
my recollection from many, many years ago, but as I recall it's just a random
sample of depth information surrounding a given fragment, using the depth of
those samples relative to the current fragment as a scaling factor for the
ambient contribution to the final fragment light. The end result isn't even
remotely close to correct, but again, correct isn't the same as looking good
enough (or even just looking good): in games you don't have total control over
how a scene may be seen by a player, and incorrect lighting can often be far
superior to correct lighting from a gameplay POV.
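
From that (possibly faulty) recollection, a toy version of the idea, not any
real engine's implementation:

```python
import random

def ssao_factor(depth, x, y, radius=2, samples=8, rng=None):
    """Toy screen-space ambient-occlusion factor for one pixel of a 2-D
    depth buffer (larger depth = farther from the camera). Randomly
    samples nearby pixels; each sample that sits in front of the current
    pixel counts as a potential occluder. Returns an ambient scale in
    [0, 1] -- no light direction involved, exactly as described above."""
    rng = rng or random.Random(0)  # fixed seed so the sketch is repeatable
    h, w = len(depth), len(depth[0])
    occluded = 0
    for _ in range(samples):
        sx = min(w - 1, max(0, x + rng.randint(-radius, radius)))
        sy = min(h - 1, max(0, y + rng.randint(-radius, radius)))
        if depth[sy][sx] < depth[y][x]:  # neighbor is in front of us
            occluded += 1
    return 1.0 - occluded / samples
```

A real implementation works per-fragment on the GPU and blurs the noisy
result, but the core sampling is about this crude.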

------
Remnant44
Neat project!

I'm not sure if the author is familiar with ambient occlusion, but the method
described here can be extended to generate ambient occlusion data, which helps
with a couple of flaws. In particular:

\- Although the shading is very nice (see Figure 8), the long shadow rays are
distracting and are not very useful for cartography.

Ambient occlusion extends the idea presented here, except that instead of
sending rays towards a relatively small light source, you consider the light
source to be the entire hemisphere of the sky, like on an overcast day. The
result is wonderfully soft light that conveys form much like Figure 8, but
without shadows.

Simply send out more rays to the entire sky hemisphere to create this effect.
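
A sketch of that extension, reusing the same single-row shadow test (names
are mine; a real implementation would also sweep azimuth rather than only
looking left):

```python
import math

def sky_fraction(heights, i, n_angles=16):
    """Fraction of the left-sky quarter-arc visible from cell i of a
    single height-map row: the per-ray shadow test, repeated for many
    elevation angles and averaged. Averaging over the whole sky instead
    of one small sun is what produces the soft, shadowless look."""
    visible = 0
    for k in range(n_angles):
        theta = (k + 0.5) * (90.0 / n_angles)  # elevation angle, degrees
        s = math.tan(math.radians(theta))
        blocked = any(heights[j] - s * (i - j) > heights[i] for j in range(i))
        if not blocked:
            visible += 1
    return visible / n_angles
```

Flat terrain sees the whole sky (factor 1.0); a cell at the foot of a cliff
sees only a sliver, so valleys darken gently instead of casting long rays.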

------
bhouston
I think it is likely possible to do this in screenspace ultra fast on demand
using a shadow map / SSAO technique.

~~~
ska
In this application you're probably interested in something more accurate,
more than the speed. But it could be worth trying, to see what interactive
re-mapping buys you.

------
pvillano
I made something similar that runs in real time in 30 lines of code.

------
zython
awesome stuff, how'd you make the illustrations (like Figure 6)?

~~~
tylermw
Thanks! I drew the frames in Inkscape and animated them together using the
Scrollmagic JavaScript library.

~~~
zython
looks good. would love to see more !

~~~
tylermw
Check out the last few posts on my blog! I use this method a few times to help
demonstrate points:

[http://www.tylermw.com/visualizing-a-reddit-hug-of-death-and...](http://www.tylermw.com/visualizing-a-reddit-hug-of-death-and-how-to-reddit-proof-your-website-for-pocket-change/)

[http://www.tylermw.com/soma-water-filters-are-worthless-how-...](http://www.tylermw.com/soma-water-filters-are-worthless-how-i-used-r-to-win-an-argument-with-my-wife/)

