The idea is to find the field of view of an observer on a map with blocking buildings (polygons), and also to answer line-of-sight queries: whether a given point is visible to the observer.
The code is commented throughout, with citations and references to articles and books. Here's the actual repo from where the above page is served:
Still, sounds like a fun time. I have my own version of this sitting around on a floppy disk somewhere, done at a point in time where floppy disks existed (possibly even 5 1/4" ones).
How much more complicated would it be if the obstacles were not polygons but points on a grid?
I've always been curious how the FoV was computed so fast in the Baldur's Gate games, given the complex grid-based maps and the weak hardware of the time.
Example grid: http://aigamedev.com/static/tutorials/FPSB_bg_rsr.png
And the FoV rendered: http://cdn.wegotthiscovered.com/wp-content/uploads/Baldurs-G... (the rooms with no one in them are slightly darker)
Here's a 3D view-shed calculation on a grid.
To be honest, it's not an obvious candidate for offloading to the GPU. It's a light task, the result is probably wanted by the CPU, and these days we probably have spare CPU cores that could be utilized while the GPU is busy.
However, with the move towards UMA for CPUs and GPUs - even the newly-announced ARM cores with Mali GPUs have coherent caches between CPU and GPU - and a move towards Vulkan, the cost will change. As the cost of getting the result back to where it's used for game logic diminishes, this could increasingly be a candidate for GPU crunching :)
(In my code there's a divide-per-grid-square that could be tackled using scaling perhaps.)
> it's not an obvious candidate for offloading to the GPU

How so? Shooting a ray to every vertex comes to mind.
I'm not following your meaning here ;)
My blog post describes how to do it in a sweep rather than 'shooting lines', for the performance reasons given in the blog post.
'Shooting lines' is pretty poor for performance because of data locality and cache pressure.
Locality is the big performance problem with ray tracing in general. All attempts at speeding up ray tracing try to form 'bundles' of adjacent rays flying in close formation that can be combined or computed together, so as to give some locality to the problem.
My scanline approach (line as in cache array of adjacent memory, not line as in line-of-sight between eye and obstacle) is good for CPUs and good for GPUs. If you want to offload viewshed computation to the GPU, you ought to strive for a scanline approach rather than 'shooting a ray to every vertex' too. My code ought to be straightforward to port to a shader.
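Not the author's actual code, but a minimal sketch of the row-at-a-time idea for one quadrant of a grid: rows are visited in memory order, moving away from the observer, while occluded angular intervals accumulate. The function name, the half-cell occluder width, and the centre-point visibility test are my simplifying assumptions:

```python
import math

def quadrant_fov(blocked, ox, oy):
    """Approximate FoV for the quadrant below the observer at (ox, oy),
    processed one grid row at a time (cache-friendly row-major order).

    blocked: 2D list of bools, blocked[y][x] is True for an obstacle.
    Returns the set of visible (x, y) cells in that quadrant.
    """
    h, w = len(blocked), len(blocked[0])
    visible = set()
    shadows = []  # (lo, hi) angle intervals occluded by nearer rows

    def covered(a):
        # crude test: only the cell centre is checked against shadows
        return any(lo <= a <= hi for lo, hi in shadows)

    for y in range(oy + 1, h):
        new_shadows = []
        for x in range(w):
            # angle of the cell centre as seen from the observer
            a = math.atan2(x - ox, y - oy)
            if not covered(a):
                visible.add((x, y))
                if blocked[y][x]:
                    # assume the cell occludes ~half a cell either side
                    half = math.atan2(0.5, y - oy)
                    new_shadows.append((a - half, a + half))
        shadows.extend(new_shadows)  # occluders affect only farther rows
    return visible
```

The same structure maps to a shader: each row (or scanline) is a coherent chunk of work, and the shadow list plays the role of the sweep state.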
When I had to do picking on a 3D terrain, I did a scanline approach too (https://bitbucket.org/rmsundaram/tryouts/src/master/CG/Terra...)
The "player" is sitting on the "wall" of a room, to the right of it there are stairs going "downwards", it's all just a heightmap.
Needless to say it's kinda slow, but I was surprised how neat it came out considering how little I put in. I'm sure someone who knows what they're doing (which isn't me) could do the same a lot faster and prettier, since I really did it in the most plump way possible.
I'm trying to build something that can be used across the DSM we have of SF for better planning our wireless links.
Shouldn't it be possible to then feed that into a fitness function and randomly "evolve" the "perfect" distribution, or is that silly?
And future configurations could also take into account how many things you'd have to move, or whatnot (I have no idea about what you're doing, I just imagine that's not something you do once and never change, right?).
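Not silly at all; even a toy local search gets you surprisingly far for placement problems like this. A sketch under the assumption that you have some `coverage_of(site)` function (e.g. backed by the viewshed computation) returning the set of cells a site can see; all names here are made up:

```python
import random

def evolve_placement(candidates, coverage_of, k, iters=200, seed=0):
    """Toy hill-climb over transmitter placements: pick k sites from
    `candidates`, then greedily accept random single-site swaps whenever
    the fitness (size of the combined covered area) improves."""
    rng = random.Random(seed)
    placement = rng.sample(candidates, k)

    def fitness(sites):
        seen = set()
        for s in sites:
            seen |= coverage_of(s)
        return len(seen)

    best = fitness(placement)
    for _ in range(iters):
        i = rng.randrange(k)
        swap = rng.choice(candidates)
        if swap in placement:
            continue
        trial = placement[:i] + [swap] + placement[i + 1:]
        f = fitness(trial)
        if f > best:  # keep strictly improving moves only
            placement, best = trial, f
    return placement, best
```

A real version would add a cost term for moving existing hardware, which is exactly the "how many things you'd have to move" idea above folded into the fitness function.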
I think with grids, one approach would be a flood-fill kind of algorithm, but doing it cell by cell will be costly.
An alternative would be to unify cells together into polygons and do intersection testing; with a hammer, everything looks like a nail :P
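As a middle ground between cell-by-cell work and full polygon unification, one cheap trick is to collapse horizontal runs of blocked cells into spans first, so the intersection tests see far fewer primitives. A rough sketch (the span representation is my own choice, not anything from the article):

```python
def merge_rows(blocked):
    """Collapse runs of blocked cells in each row into (x0, x1, y) spans.

    blocked: 2D list of bools, blocked[y][x] is True for an obstacle.
    Returns a list of inclusive spans, one per maximal horizontal run.
    """
    spans = []
    for y, row in enumerate(blocked):
        x = 0
        while x < len(row):
            if row[x]:
                x0 = x
                while x < len(row) and row[x]:
                    x += 1
                spans.append((x0, x - 1, y))  # inclusive run [x0, x-1]
            else:
                x += 1
    return spans
```

Merging vertically-adjacent identical spans into rectangles would shrink the primitive count further, at the cost of a second pass.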
I think in games like Baldur's Gate, [Portal Visibility](http://playtechs.blogspot.in/2007/03/2d-portal-visibility-pa...) is commonly used.
See the References section.
* First render the scene in 3D, from the viewer's position, using a perspective transformation with the right field-of-view to match the "bounding angle".
* Save the resulting Z-buffer into a texture.
* When rendering the 2D, top-down scene, you can consult the Z-buffer (now a texture) to see whether that pixel should be highlighted or not.
I think this technique is already used, but in the complementary direction, to calculate shadows from dynamic lights.
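Right, that's essentially shadow mapping with the light at the observer. A CPU sketch of the same two-pass idea in 2D, using a 1D depth buffer indexed by angle instead of a Z-buffer texture (the bin count and blocked-grid representation are my assumptions, and the angular binning is deliberately crude):

```python
import math

def depth_buffer_fov(blocked, ox, oy, n_rays=360):
    """Two-pass FoV: build a 1D 'Z-buffer' of nearest-obstacle distance
    per angular bin, then classify each cell by comparing its distance
    against the buffer entry for its angle."""
    h, w = len(blocked), len(blocked[0])
    far = math.hypot(w, h)
    depth = [far] * n_rays

    def angle_bin(x, y):
        a = math.atan2(y - oy, x - ox)
        return int((a + math.pi) / (2 * math.pi) * n_rays) % n_rays

    # pass 1: record the nearest blocker distance in each angular bin
    for y in range(h):
        for x in range(w):
            if blocked[y][x]:
                i = angle_bin(x, y)
                depth[i] = min(depth[i], math.hypot(x - ox, y - oy))

    # pass 2: a cell is lit iff it is no farther than the stored depth
    visible = set()
    for y in range(h):
        for x in range(w):
            if math.hypot(x - ox, y - oy) <= depth[angle_bin(x, y)] + 1e-9:
                visible.add((x, y))
    return visible
```

On a GPU the first pass is the depth render and the second is the per-pixel texture lookup; the structure is identical.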
This is a floating-point comparison problem; I faced it frequently during development. I need to play with it more to arrive at a better epsilon (ε) value.
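Not tied to this code specifically, but the usual fix is to scale the tolerance with the magnitudes being compared rather than hunt for one fixed ε; Python's `math.isclose` implements the same idea. A minimal version:

```python
def nearly_equal(a, b, rel=1e-9, abs_tol=1e-12):
    """Compare floats with a relative tolerance scaled to the larger
    magnitude, falling back to an absolute tolerance near zero.
    A single fixed epsilon fails at both large and tiny magnitudes."""
    return abs(a - b) <= max(rel * max(abs(a), abs(b)), abs_tol)
```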
The debug view is beautiful. Great work.
Both the idea and the algorithms are decades old and were already used in 80s and 90s computer games like Wolf3D and Doom.
And the discussion upthread is superb, so this was an excellent post for HN in every way.
But hey, I can't answer your question, since I cannot upvote on my own post, so I'm not the one calling it interesting ;)
OP was clever enough to figure it out, and that's great, but it bugs me some every time I see them say that they couldn't understand other people's explanations of how it works.
> it bugs me some every time I see them say that they couldn't understand other people's explanations of how it works.
I think I need to explain. Most of them explain their work well, but I couldn't reuse it, since they all assume a 360° FoV not limited by distance, while I wanted something else; hence this mini project.
The ones which had what I wanted were closed source.
What's not ok is to grouse about it. If you know more than others, share some of what you know. Superciliousness in one's knowledge is a wrong spirit in which to comment here, and if you don't want to see HN degrade, that's the kind of thing you should help us cut back on.