One of my favorite renderings of this is in this article [1], which shows the relationship between the Delaunay triangulation, the Voronoi diagram, the relative neighborhood graph (RNG), and the Euclidean minimum spanning tree (EMST) of a set of 2D points (see the sketch below).
I have a theory that the union of the edges of the RNG and EMST could be used for automatic navigation between widgets in a GUI: combining the two, there are always between 1 and 4 edges coming out of every point, so each of them could correspond to a direction key (up/down/left/right) according to some simple heuristic.
This is pure speculation (graphics algorithms and geometry aren't my strengths), but if you were to do this with a set of 3D points, would you get a similar relationship between the Delaunay triangulation (tetrahedralization / convex hull?), a relative neighborhood graph, and a minimum spanning tree of the 3D points, where combining the RNG and the MST would yield between 1 and 8 edges coming out of each point? (At which point we could use, say, the numpad for 3D GUI widget navigation in the metaverse, the same way you suggest using the 4 cardinal directions for 2D GUIs.)
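For the 2D case this is easy to experiment with, since EMST ⊆ RNG ⊆ Delaunay (so the union of the RNG and EMST edges is in fact just the RNG). A rough sketch assuming scipy, where the RNG test checks that the "lune" between an edge's endpoints is empty of other points:

    # Compute Delaunay edges, filter them down to the RNG (edge (a, b)
    # survives iff no third point r is closer to both a and b than they
    # are to each other), and take the EMST as an MST over the Delaunay
    # graph (which always contains it).
    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree

    def delaunay_edges(pts):
        edges = set()
        for tri in Delaunay(pts).simplices:
            for i in range(3):
                a, b = sorted((tri[i], tri[(i + 1) % 3]))
                edges.add((a, b))
        return sorted(edges)

    def rng_edges(pts, edges):
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        return [(a, b) for a, b in edges
                if not any(max(d[a, r], d[b, r]) < d[a, b]
                           for r in range(len(pts)) if r not in (a, b))]

    pts = np.random.rand(30, 2)
    dt = delaunay_edges(pts)
    rng = rng_edges(pts, dt)
    rows, cols = zip(*dt)
    weights = np.linalg.norm(pts[list(rows)] - pts[list(cols)], axis=1)
    emst = minimum_spanning_tree(
        csr_matrix((weights, (rows, cols)), shape=(len(pts), len(pts))))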
Interesting question! By virtue of being a tree, the MST produces at most 3 edges coming out of any vertex, and this should be the same in 3D. The RNG then (sometimes) adds a 4th edge, so although you could build both graphs in 3D space, you would still end up with at most 4 edges coming out of any vertex, I think.
In 3D space the Delaunay triangulation would produce a bunch of irregular tetrahedra, so the number of edges coming out of each vertex would vary between a minimum of 3 and a maximum of 12, if I get it right (ref: [1] :-).
The 3D Voronoi cells are another story... I found some implementations that you can play with to see how they look [2] [3]; each cell is a shape called a "convex polytope". It feels like these cells are packed like the sub-cubes of a Rubik's Cube, but I'm not 100% sure :-) ... if that's true, you could jump from each vertex to the next in at most 26 directions? (hand-waves :-p)
Indeed: the convex hull is the outer boundary of the Delaunay triangulation. In addition, if you "bow out" the points into 3D (lift them onto the paraboloid z = x^2 + y^2), the lower part of the 3D convex hull projects back down to exactly the Delaunay triangulation. The two concepts are very closely related.
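A minimal sketch of that lifting trick, assuming scipy (the lower hull is simply the set of facets whose outward normal points down):

    import numpy as np
    from scipy.spatial import ConvexHull

    def delaunay_via_lifting(points2d):
        pts = np.asarray(points2d, dtype=float)
        # lift each 2D point onto the paraboloid z = x^2 + y^2
        lifted = np.column_stack([pts, (pts ** 2).sum(axis=1)])
        hull = ConvexHull(lifted)
        # keep only downward-facing facets (z component of outward normal < 0);
        # they project straight back down to the Delaunay triangles
        lower = hull.equations[:, 2] < 0
        return hull.simplices[lower]

    triangles = delaunay_via_lifting(np.random.rand(20, 2))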
The process to create it is simpler than you might think: select the points, draw the perpendicular bisector between each pair of neighbouring points, and the corners appear naturally. Tape off all the edges and paint the wall, starting with almost-white and mixing more of the three base colours in for each successive cell.
I wrote a moderately popular Delaunay/Voronoi library for Unity a few years back [1], with a neat little demo video [2]. I implemented the incremental Bowyer-Watson algorithm for generating the triangulations, and then took the dual to generate the Voronoi tessellation. (I also added a "clipper" that clips the Voronoi diagram to a convex outline, which was fun; I hadn't seen that anywhere else before and had to figure out how to do it myself.)
Bowyer-Watson (which is described in this article) seems very simple to implement when you start: just begin with a "big triangle", add points to it iteratively, and perform the flips you need. To make it performant you have to build up a tree structure as you go, but it's not very tricky.
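For reference, the test driving those flips is the classic in-circumcircle predicate; here is a naive floating-point sketch in Python (no robustness tricks, which is exactly where the trouble below comes from):

    import numpy as np

    def in_circumcircle(a, b, c, d):
        # d lies inside the circumcircle of the counter-clockwise triangle
        # (a, b, c) iff this 3x3 determinant is positive
        m = np.array([[p[0] - d[0], p[1] - d[1],
                       (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2]
                      for p in (a, b, c)])
        return np.linalg.det(m) > 0.0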
However: almost every description (including Wikipedia's) and implementation of Bowyer-Watson I could find was wrong. There's an incredibly subtle and hard-to-deal-with issue with the algorithm, which is the initial "big triangle". Most people who implement it (and indeed I did the same initially) just make the triangle big enough to contain all the points, but that's not enough: it needs to be big enough to contain all the points in all the circumcircles of the triangles. These circumcircles can get ENORMOUS: in the limit of three points on a line, a circumcircle becomes an infinite half-plane. Even if the points aren't literally collinear, just close to it, the triangle becomes way too huge to deal with.
The answer is that you have to put the big triangle's corners "at infinity", which is a very weird concept. Basically, you have to have special rules for these points when doing comparisons; it's really tricky and very hard to get right.
If I were doing this again, I wouldn't use Bowyer-Watson; this subtlety is too hard to get right. Fortune's sweep-line is more complex on the surface, but that's the one I would go with. Or the 3D convex hull technique (which I also wrote a library for, by the way [3]).
I tried doing this by just assigning the points to [-inf, -inf, +inf, +inf, +inf, -inf], quickly ran into many, many NaNs, and was quite sad that it didn't work. It would be cool if floating-point infinities actually worked like you expect infinity to - although perhaps that's just my shoddy understanding of infinity talking.
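That's IEEE 754 doing what it must, unfortunately: subtracting two same-signed infinities (as happens in the coordinate differences inside a circumcircle determinant) is undefined and yields NaN, which then propagates through everything:

    import math
    print(math.inf - math.inf)  # nan -- e.g. any coordinate difference
    print(math.inf * 0.0)       # nan -- e.g. a determinant cross term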
For discrete Voronoi, just render a cone from each vertex using OpenGL/WebGL. Encode the vertex ID in the color. The z-buffer does all the work. Read back the framebuffer. (A CPU sketch of what this computes follows below.)
Just ignore Big O analysis. I guarantee any kind of GPU will blow all CPU algorithms out of the water. This has been known for at least 30 years - I believe there was an SGI demo of it back in the day.
OpenGL can also do edge detection as image processing (a Laplacian convolution) and give you the line boundaries, if that's what you need. Perhaps a jittered double-rendering can give the same result.
If the resolution is not enough, just multiply the canvas size - x2, x4 ... it will still be faster.
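Not the GPU version, but a NumPy sketch of what the cone/z-buffer trick computes - the per-pixel nearest-seed ID - plus the boundary extraction mentioned above, done here as a neighbour-difference test rather than a Laplacian:

    import numpy as np

    def discrete_voronoi(size, seeds):
        ys, xs = np.mgrid[0:size, 0:size]
        seeds = np.asarray(seeds, dtype=float)  # (y, x) seed coordinates
        # squared distance from every pixel to every seed: (size, size, n)
        d2 = (ys[..., None] - seeds[:, 0]) ** 2 + (xs[..., None] - seeds[:, 1]) ** 2
        return d2.argmin(axis=-1)  # the "color" the z-buffer would leave behind

    def boundaries(ids):
        # a pixel is on a cell edge if its ID differs from a neighbour's
        edge = np.zeros(ids.shape, dtype=bool)
        edge[:, :-1] |= ids[:, :-1] != ids[:, 1:]
        edge[:-1, :] |= ids[:-1, :] != ids[1:, :]
        return edge

    ids = discrete_voronoi(256, [(40, 50), (200, 60), (128, 180)])
    mask = boundaries(ids)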
I've implemented JFA (the jump flooding algorithm) successfully in C using nothing but the original paper, but I still haven't managed to get my head around Fortune's. It's one of those things that's just crying out for someone to make a "This is how ..." tutorial / walkthrough for.
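For anyone who wants to see the whole idea at once, a minimal and deliberately unoptimized Python sketch of JFA, assuming integer seed coordinates on a square grid (note plain JFA is a very good approximation, not exact): passes with halving step sizes, where every pixel adopts a neighbour's seed whenever that seed is closer.

    import numpy as np

    def jfa(size, seeds):
        # best[y, x] = index of the closest seed found so far (-1 = none yet)
        best = np.full((size, size), -1, dtype=int)
        for i, (sy, sx) in enumerate(seeds):
            best[sy, sx] = i

        def d2(y, x, i):  # squared distance from pixel (y, x) to seed i
            sy, sx = seeds[i]
            return (y - sy) ** 2 + (x - sx) ** 2

        step = size // 2
        while step >= 1:
            new = best.copy()
            for y in range(size):
                for x in range(size):
                    # look at the 8 neighbours (and self) at the current offset
                    for dy in (-step, 0, step):
                        for dx in (-step, 0, step):
                            ny, nx = y + dy, x + dx
                            if not (0 <= ny < size and 0 <= nx < size):
                                continue
                            cand = best[ny, nx]
                            if cand == -1:
                                continue
                            if new[y, x] == -1 or d2(y, x, cand) < d2(y, x, new[y, x]):
                                new[y, x] = cand
            best = new
            step //= 2
        return best

    labels = jfa(64, [(10, 12), (50, 20), (30, 55)])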
The numerical robustness thing gave me a chuckle (rotate by 1 radian and pray to the geometry gods), especially as I've been spending a good fraction of my time dealing with exactly that in fancy path rendering and stroking.
One of the things I really want to work on in the next few months is a path intersection implementation that's robust by construction, backed up both by a convincing (if not formal) argument and tests crafted to shake out robustness issues. I have a bunch of ideas, largely motivated by the need to generalize well to curves - Shewchuk's work, cited elsethread, is impressive but I'm not smart enough to figure out how to make it work for arbitrary Béziers.
There's an issue[277] to track the work, and that has pointers to some of the ideas. If anyone is interested in working with me on this, please get in touch. If successful, I think it'll result in a nice code base and also likely a publishable paper.
Unfortunately, as you move into higher dimensions, these algorithms typically bog down.
Suppose you have a point that is inside the convex hull of the mesh you want to use for triangulation (we're talking hyper-triangles, i.e. simplices, here). Which points are the best ones to choose for your triangulation? Since there are so many candidate hyper-triangles, you cannot possibly store the set of them beforehand.
I approached this problem with linear programming, using the distance to the mesh points to find the best triangle (see the sketch below). Not sure if this is the best approach though.
Happy to hear if someone knows of a better approach.
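One concrete way to set that up (a sketch, not necessarily the parent's exact formulation): because of the paraboloid-lifting correspondence, the Delaunay simplex containing a query point q falls out of a single LP - minimize the lifted heights over convex weights that reproduce q. The optimal basic solution has at most d+1 nonzero weights, and those mark the simplex's vertices. Assuming scipy:

    import numpy as np
    from scipy.optimize import linprog

    def containing_delaunay_simplex(points, q):
        pts = np.asarray(points, dtype=float)   # shape (n, d)
        n, d = pts.shape
        c = (pts ** 2).sum(axis=1)              # lifted heights |p_i|^2
        A_eq = np.vstack([pts.T, np.ones(n)])   # sum w_i p_i = q, sum w_i = 1
        b_eq = np.append(np.asarray(q, dtype=float), 1.0)
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        if not res.success:
            return None                         # q lies outside the convex hull
        return np.flatnonzero(res.x > 1e-9)     # vertex indices of the simplex

    pts = np.random.rand(50, 4)                 # works in any dimension
    simplex = containing_delaunay_simplex(pts, pts.mean(axis=0))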
A new weather model NCAR is working on uses Voronoi meshes to divide the planet into cells. The Voronoi meshes allow nice transitions from higher-resolution to lower-resolution cells, so you can model at higher resolution in areas of interest and don't have the same problems the WRF and GFS have trying to divide a sphere into square cells.
I would be interested in an explanation of why it's O(n log(n)).
It requires sorting the points along one axis, which already gives O(n log(n)) as a lower bound, but I'd be interested in how the line sweeping would need to be done to not go over that.
The number of points on the beach line should scale roughly with the square root of the number of points, so a naive search/replace per point insertion would take sqrt(n) operations, giving an O(n*sqrt(n)) overall runtime.
Just like in many other sweep-line algorithms, you store the beach line in a balanced binary search tree, which costs log(n) per tree operation. The total number of tree queries is proportional to the number of features (faces, edges, vertices) of the Delaunay triangulation, and the fact that this is proportional to the number of vertices can be proved using Euler's formula V - E + F = 2, with V = n and E = 3F/2 (each triangle has 3 edges, but each edge is shared by 2 faces).
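Spelling that out (exact for a maximal planar triangulation; an upper bound for a Delaunay triangulation with its convex-hull boundary):

    \[
      V - E + F = 2, \qquad E = \tfrac{3}{2}F
      \;\Longrightarrow\; F = 2V - 4, \quad E = 3V - 6 = O(n),
    \]

so the sweep performs O(n) tree operations at O(log n) each, i.e. O(n log(n)) total.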
The complexities don't multiply here. They would only multiply if you had to sort for EACH of something; we're only doing a single sort.
> sorting the points along one axis, which already gives O(n log(n))
You're looking at it backwards: this is actually good news! It means that any subsequent work we do of similar or lower complexity doesn't change anything.
I never said that. I just mentioned that O(n log(n)) is the lower bound since sorting will already take that long.
What I'm concerned with is the complexity of the scanning step, which I think might be more than O(n log(n)) on its own (and therefore more than O(n log(n)) overall).
At the least, I'd like to see an explanation of why it's <= O(n log(n)).
Being closed really isn't an obstacle. For example, you can solve the Euclidean case on a torus (coordinates taken modulo the domain size) by tiling the space once more around itself, running a vanilla Euclidean solver, and then unioning the 9 copies of the space (sketch below).
If the point set is degenerate (e.g. 4 points on a circle), then you may get different local results in different copies of the space, and the unioning may be difficult.
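A sketch of that tiling trick on the unit torus, assuming scipy's Voronoi: copy the points into the 8 surrounding tiles, run the plain Euclidean solver, and read off the cells of the central copy, whose geometry is then correct under wrap-around:

    import numpy as np
    from scipy.spatial import Voronoi

    def toroidal_voronoi(points):
        pts = np.asarray(points, dtype=float)   # points in [0, 1)^2
        shifts = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        tiled = np.concatenate([pts + np.array(s) for s in shifts])
        vor = Voronoi(tiled)
        n = len(pts)
        k = shifts.index((0, 0))                # the central, unshifted copy
        central = np.arange(k * n, (k + 1) * n)
        # vor.point_region[i] -> region of point i; vor.regions -> its vertices
        return vor, central

    vor, central = toroidal_voronoi(np.random.rand(40, 2))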
--
1: https://axltnnr.io/2018/Triangulation/