It teaches basic modeling and texturing but in a way where you start picking up on the shortcuts and interface slowly over the course. There's not much that's intuitive about the UI but it finally makes sense to me.
Blender is another story. Its interface is a hazard; I can't even understand why it is SO terrible. Even saving files becomes a rabbit hole of nonsense due to how buttons are positioned and the assumptions you make about what would be sane. It almost seems like an elaborate practical joke.
For money-burning reasons I had to use 3ds Max when I was making games for the army. It had been years since I'd touched it, but after a few days it was fine enough for my purposes. Still wish they'd have let me use Blender though.
In general, I actually like Blender's interface. I'd like it more if it was a little more vimlike (all the screen splitting stuff could be under 'Ctrl-W', for instance).
My problem with Blender is more about how easy it is to lose data. A reliable autosave would be amazing.
The idea is, imagine the light is coming from the left, and our height map consists of a single row that looks like (0,5,2,7,1).
By using the angle of the light, we can compute from left to right whether each point is visible. We only need to store the last visible point, since that is the only point that can block us from this light source. In the given example, at the point of height 2 we only need to check against the previously visible point of height 5. If 2 isn't visible, 5 remains the "highest" point; if 2 is visible, 2 becomes the "highest" point. There is no danger that 5 will block something beyond 2, because 2 itself wasn't blocked. So at each point we only have to check against one previous point, not all of them.
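The scan above can be sketched in a few lines. This is a minimal illustration, not code from the article; `lit_mask` and its parameters are made-up names. Tracking the last lit point is equivalent to carrying forward the height of the shadow line it casts, which is what this does:

```python
import math

def lit_mask(heights, sun_elevation_deg, cell_size=1.0):
    """Return a list of booleans: True where a point sees the light.

    Equivalent to tracking the last lit point: we carry forward the
    height of the shadow line it casts, which drops by tan(elevation)
    per cell as it travels from left to right.
    """
    drop = math.tan(math.radians(sun_elevation_deg)) * cell_size
    lit = []
    shadow_h = float("-inf")  # height of the shadow line at this column
    for h in heights:
        lit.append(h >= shadow_h)           # above the shadow line -> lit
        shadow_h = max(shadow_h, h) - drop  # taller/lit point becomes the new blocker
    return lit
```

For the example row (0, 5, 2, 7, 1) with the sun at 45 degrees, this marks the points of height 2 and 1 as shadowed and everything else as lit, in a single O(c) pass per row.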
If the light isn't coming from exactly the left side, then we can transform the image so that the light comes from the left. This transformation takes O(r * c) time per light.
Total complexity: O(transforms + visibilityChecks + mergingResults) = O(l * r * c) for l lights on an r-by-c map.
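The transform step might be sketched like this (my own illustration, with made-up names; nearest-neighbor resampling for brevity, where a real implementation would interpolate):

```python
import math

def rotate_heightmap(grid, angle_deg):
    """Resample grid rotated by angle_deg, so a light at that azimuth
    ends up coming from the left. Cells that fall outside the source
    are left as None."""
    rows, cols = len(grid), len(grid[0])
    a = math.radians(angle_deg)
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    out = [[None] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # Inverse-rotate the output coordinate to find the source sample
            sx = cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a)
            sy = cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a)
            si, sj = round(sy), round(sx)
            if 0 <= si < rows and 0 <= sj < cols:
                out[y][x] = grid[si][sj]
    return out
```

Each output cell is visited once, which is where the O(r * c) per light comes from.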
There seem to be odd contours, banding, striations. Perhaps it is the gamma-thing that you point out — the blackness falling off too quickly. Or perhaps the algorithm has been overly optimized and the math is taking unnatural shortcuts.
The angle-to-brightness curve can be adjusted as needed to avoid under- or over-emphasizing mountain slopes. The "Andes" example is probably a case of exaggerating topography to illustrate relative differences, and NOT an inherent fault of slope-to-brightness maps (DBM-like). I can't say which one is more practical without going to the actual slopes to check the look and feel of the terrain and "feel" of the slopes. For technical accuracy, color coding is probably better (color-per-height), but slope-shaded maps are easier for regular folks to relate to.
Ray tracing is probably better when dealing with direct shadows and reflections, such as found with buildings. But for high level maps with mountains and rivers, I believe DBM is just fine if tuned right, and won't differ noticeably from comparable ray tracing.
To avoid misinterpretations, the large-scale shading should more or less mirror a semi-cloudy day such that direct shadows and reflections would play an extremely minor role anyhow.
I should point out there is an element of subjectivity here. Illustrations often exaggerate to emphasize various attributes of the illustration's target. Often there is a trade-off in terms of aiding the quick mental digestion of information versus "native" accuracy. Abstractions lie on purpose, or we wouldn't call them "abstractions".
The raycasting code in this article, on the other hand, seems to be missing the Lambert term! Just because a surface is not in shadow doesn't mean that it will appear at uniform brightness. If it is diffuse/matte, which seems like a reasonable starting point for most terrain, each light ray L should have a contribution to the brightness of the point proportional to L dot N, where N is the normal vector of the surface (and to find that you need, sorry, the gradient). In short, light almost tangent to the surface doesn't illuminate it much. Other kinds of surfaces will have different BRDFs (e.g. they can be shiny) but none of them will have the completely unshaded behavior shown. Of course, it's fine to do totally nonphysical things if it helps emphasize some aspect of the data, but it doesn't seem to be done knowingly and I think it produces a misleadingly flat look.
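A minimal sketch of what adding the Lambert term looks like for a height field (my own illustration, not the article's code; `lambert_brightness` and its parameters are made-up names, and `light_dir` is assumed to be a unit vector pointing toward the light):

```python
import math

def lambert_brightness(height, x, y, light_dir, cell_size=1.0):
    """Diffuse brightness at grid cell (x, y): max(0, L . N)."""
    # Gradient of the height field by central differences
    dhdx = (height[y][x + 1] - height[y][x - 1]) / (2 * cell_size)
    dhdy = (height[y + 1][x] - height[y - 1][x]) / (2 * cell_size)
    # Surface normal of z = h(x, y) is (-dh/dx, -dh/dy, 1), normalized
    nx, ny, nz = -dhdx, -dhdy, 1.0
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    lx, ly, lz = light_dir
    dot = (nx * lx + ny * ly + nz * lz) / norm
    return max(0.0, dot)  # grazing light -> near 0, overhead light -> near 1
```

On flat ground with the light straight overhead this gives 1.0, and with the light exactly at the horizon it gives 0.0, which is the falloff the raycasting-only version is missing.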
For the angle-to-brightness curve, one of my footnotes says:
"I believe his code only lacks a step that implements a minimum threshold on the slope before it gets colored--a step that would remove the problems seen in Figure 1 in flat areas."
With regards to misinterpretations:
"There's a tradeoff between realism and actual topographic information--real shadows can tell you about the relative height of two objects, but nothing about the slope of the land in the shadow itself."
(I also mention how you can adapt the rayshading technique for a local model that doesn't have that problem)
And later on in that paragraph, about subjectivity:
"Thirdly, cartography has an artistic and practical component, and it’s true what reliefshading.com said in the non-computational shade it threw at global models–sometimes, you need to tweak the direction of the light hitting a particular mountain or valley because it looks better."
3D rendering in games even today doesn't (in general) raytrace. While some things are raytraced in games /today/, performance is still achieved largely by leveraging the GPU, which is nowhere near as good at raytracing as at general rasterizing. Essentially, GPUs make raytracing faster simply because it is ridiculously parallel, and by limiting the kinds of features that are supported (vs. general CPU raytracing).
And also "cheating" -- raytracing is ideally "correct", but for a game you don't need to be correct, you just need to be good enough (or rather, you need to be pretty, which isn't necessarily physically correct).
That said, based on the content of the article, this could be trivially improved with better data structures -- it's just walking a height map, but the article makes no mention of quadtrees or k-d trees, which are IIRC the standard go-to for height map traversal.
It's correct that it cheats (takes a few samples and blurs the results) instead of sampling adequately to get "correct" results.
SSAO doesn't sample a path across the depth buffer (that wouldn't make any sense for shading), or consider light direction. This is very much my recollection from many, many years ago, but I recall that it's just a random sample of the depth information surrounding a given fragment, using the depth of those samples relative to the current fragment as a scaling factor for the ambient contribution to the final fragment lighting. The end result isn't even remotely close to correct, but again, correct isn't the same as looking good enough (or even just looking good -- in games you don't have total control over how a scene may be seen by a player, and incorrect lighting can often be far superior to correct lighting from a gameplay POV).
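As a rough sketch of that idea (my own toy version on a plain 2D depth array, not any particular engine's implementation; all names and constants here are made up, and real SSAO works in view space with a hemisphere kernel):

```python
import random

def ssao_factor(depth, x, y, radius=4, n_samples=16, bias=0.5):
    """Return an ambient scale in [0, 1] for the fragment at (x, y).

    depth is a 2D array where smaller values are closer to the camera.
    Samples noticeably closer to the camera than the fragment count as
    occluders; more occluders -> darker ambient term.
    """
    center = depth[y][x]
    occluded = 0
    for _ in range(n_samples):
        sx = x + random.randint(-radius, radius)
        sy = y + random.randint(-radius, radius)
        if not (0 <= sy < len(depth) and 0 <= sx < len(depth[0])):
            continue  # off-screen samples contribute nothing
        if depth[sy][sx] < center - bias:
            occluded += 1
    return 1.0 - occluded / n_samples
```

On a flat depth buffer this returns 1.0 (no darkening); at the bottom of a pit, where the neighborhood is closer to the camera, the factor drops. Note there is no light direction anywhere in it, which is the point above.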
There are many techniques relating to relief mapping that deal with ray marching through a depth map. Here is a pdf:
This technique would actually make it run very fast on a GPU, likely with better quality as well.
The term to google is "Parallax Occlusion Mapping," which was used for several extremely impressive (for the time) GPU demos in 2004.
I'm not sure if the author is familiar with ambient occlusion, but the method described here can be extended to generate ambient occlusion data which helps with a couple flaws. In particular:
- Although the shading is very nice (see Figure 8), the long shadow rays are distracting and are not very useful for cartography.
Ambient occlusion extends the idea presented here, except that instead of sending rays towards a relatively small light source, consider the light source to be the entire hemisphere of the sky, like on an overcast day. The result is wonderfully soft light that conveys form much like Figure 8, but without shadows.
Simply send out more rays to the entire sky hemisphere to create this effect.
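A hedged sketch of that extension for a single height-map row, reusing the shadow-ray idea from the parent article (my own illustration; `sky_visibility` and the angle sampling are made up, and a real implementation would march rays in 2D over the whole map):

```python
import math

def sky_visibility(heights, i, n_angles=8, cell_size=1.0):
    """Fraction of sampled sky directions visible from point i (0..1)."""
    visible = 0
    total = 0
    for k in range(1, n_angles + 1):
        elev = math.radians(90.0 * k / (n_angles + 1))  # elevation above horizon
        for step in (-1, 1):                            # look left and right
            total += 1
            blocked = False
            j = i + step
            while 0 <= j < len(heights):
                # Height of a ray leaving point i at this elevation, at column j
                ray_h = heights[i] + math.tan(elev) * cell_size * abs(j - i)
                if heights[j] > ray_h:
                    blocked = True
                    break
                j += step
            if not blocked:
                visible += 1
    return visible / total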
My suggestion: a simple GIF would work fine. Or if you want to get fancy, you can do interactive demos like on this blog: https://www.redblobgames.com/articles/visibility/