"Faces are sorted back-to-front in a very approximate way according to the Z centroid".
In certain cases, this can lead to showing the back face and hiding the front face because the centroid position is not sufficient for figuring out the Z order of triangles.
Other than this minor issue, it's a great implementation.
Maybe I should compare min positions rather than centroids, but I don't think that would fix those cases.
I think the only real fix involves splitting up the polygons for the failure cases, which seems complex.
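For anyone following along, the centroid sort being discussed is essentially a painter's-algorithm pass like this (my own sketch, not the article's code; assumes a camera space where larger z is closer to the viewer):

```python
def painter_sort(faces):
    """Sort faces back-to-front by the z of their centroid.

    `faces` is a list of triangles, each a list of (x, y, z) vertices,
    in a camera space where smaller z is farther away. This is the
    approximate sort that can fail: two triangles can overlap on screen
    with centroid order opposite to the true occlusion order.
    """
    def centroid_z(face):
        return sum(v[2] for v in face) / len(face)
    return sorted(faces, key=centroid_z)  # farthest first

# A failure case: a large, steeply tilted triangle whose centroid is far
# away can still poke in front of a small nearby triangle.
far_but_in_front = [(0, 0, -10), (4, 0, -10), (2, 2, 5)]  # centroid z = -5
near = [(1, 0, 1), (3, 0, 1), (2, 2, 1)]                  # centroid z = 1
ordered = painter_sort([near, far_but_in_front])
# The tilted face is drawn first (centroid farther), so the near face is
# painted over it -- even where the tilted face (z up to 5) should win.
```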
Handling intersecting polygons, as you said, is a bit more complex, but for overlapping polygons it's possible to find the Z-order robustly by computing the intersections with the line of sight.
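A minimal sketch of that idea (my own construction, assuming an orthographic view along -z): pick a screen point inside the 2D overlap, then compare the depths of each polygon's supporting plane at that point. As long as the polygons don't interpenetrate, any interior overlap point gives a consistent order.

```python
def plane_depth_at(tri, x, y):
    """Depth z of triangle `tri`'s supporting plane at screen point (x, y),
    assuming an orthographic view along -z. Returns None if the plane is
    edge-on. `tri` is three (x, y, z) vertices."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    # Plane normal n = (p1 - p0) x (p2 - p0)
    ux, uy, uz = x1 - x0, y1 - y0, z1 - z0
    vx, vy, vz = x2 - x0, y2 - y0, z2 - z0
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    if nz == 0:
        return None  # edge-on, no unique depth
    # Plane equation n . (p - p0) = 0 solved for z:
    return z0 - (nx * (x - x0) + ny * (y - y0)) / nz

# Two overlapping triangles; (1, 1) lies inside both 2D projections.
a = [(0, 0, 0), (4, 0, 0), (0, 4, 0)]  # flat at z = 0
b = [(0, 0, 2), (4, 0, 2), (0, 4, 2)]  # flat at z = 2
# plane_depth_at(a, 1, 1) < plane_depth_at(b, 1, 1), so a is behind b
# and should be drawn first.
```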
I recently ran into similar problems to be solved in Python, because I am trying to build a replacement for OpenSCAD by duct-taping together various Python packages. Shapely is quite useful for 2D geometry (like intersection and stuff). PyMesh is extremely useful for 3D meshes, but hard to build.
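For example, the kind of 2D set operations Shapely makes trivial (requires `pip install shapely`):

```python
from shapely.geometry import Polygon

# Two 2x2 squares overlapping in a 1x1 region.
a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
b = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])

overlap = a.intersection(b)  # another Polygon
print(overlap.area)          # 1.0
print(a.union(b).area)       # 7.0  (4 + 4 - 1)
```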
There are some non photorealistic rendering (NPR) options for Blender, many of them implemented in Python. I don't know if they can output SVG data currently.
ThreeJs has an SVG engine, meaning a backend that outputs SVG.
I am very interested in this.
Currently I am trying to reimplement the "construct" part with PyMesh and Shapely, but I haven't published it yet.
One of the problems for me is that I am sort of stumbling into this "CAD" thing from a coding perspective. CAD programs like Fusion 360 or FreeCAD seem a bit too complicated for me, and while grappling with the concepts I end up reinventing some of the concepts/tools.
Particularly I want to develop 3D printed objects the same way I would do data science: Start with Jupyter notebooks, iterate, maybe end up with parametric Python scripts.
It's not necessarily going to be as efficient (or rather, as hardware-acceleratable) as computing Z at each point and maintaining a Z-buffer, but that would assume a fixed resolution.
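For contrast, the fixed-resolution Z-buffer approach is roughly this (a toy point-sample sketch of my own; a real rasterizer does the same test per triangle fragment, in hardware):

```python
def zbuffer_points(points, width, height):
    """Resolve per-pixel visibility for (x, y, z, color) samples,
    keeping the nearest sample (largest z) at each pixel."""
    depth = [[float("-inf")] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for x, y, z, color in points:
        if 0 <= x < width and 0 <= y < height and z > depth[y][x]:
            depth[y][x] = z
            image[y][x] = color
    return image

# Two samples land on pixel (0, 0); the nearer (z = 2.0) blue one wins.
img = zbuffer_points([(0, 0, 1.0, "red"), (0, 0, 2.0, "blue")], 2, 2)
```

No sorting needed, but the output is a raster at a fixed resolution rather than resolution-independent vector shapes.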
I ran the 4-shape example through SVGOMG and lowered the precision slider as far as it would go without affecting the perceived output.
The result was about 40% of the original size.
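Lowering precision is essentially rounding every coordinate in the file. A crude hand-rolled version of what that slider does (regex-based, my own sketch; it matches plain decimal literals and ignores scientific notation):

```python
import re

def round_svg_numbers(svg, ndigits=1):
    """Round all bare decimal literals in an SVG string to `ndigits`
    places. Crude but enough to show where the size savings come from."""
    return re.sub(
        r"-?\d+\.\d+",
        lambda m: f"{float(m.group()):.{ndigits}f}",
        svg,
    )

print(round_svg_numbers('<polygon points="1.23456,7.89123 0.5,2.0"/>'))
# -> <polygon points="1.2,7.9 0.5,2.0"/>
```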
Side note -- if using React, svg2jsx is great for converting the SVG properties to React JSX names.
It was one of those moments where I knew someone out there MUST'VE made a tool for it, but it was a discoverability problem.
I don't think it was seen.io, but seen.io appears to do something similar to this post, in JS: http://seenjs.io/
Still a crazy neat tool, but targeting edges as paths instead of polygons would save quite a few bytes when hosting this stuff.
But when you know the mesh is low-poly, the technique is really cool. It should be trivial to port from Python to any other language, also from SVG to e.g. PDF.
I built this out to translate 3d models or signed distance fields to SVGs for pen plotting purposes. You can see an example here, generated from a few lines of distance field code: https://scontent-atl3-1.cdninstagram.com/vp/9212f4c15cf30583...
And another generated from an STL I generated with OpenSCAD (part of an Egg sculpture series I did for 3d printing): https://scontent-atl3-1.cdninstagram.com/vp/d3fdf208c687551a...
BTW I always think this website is something to do with being Out and Pride. That is my default parsing no matter how many times I've seen cool stuff on this website and the great stuff that Philip does.
I am interested in where it would be an advantage (if at all) to use SVG over OpenGL, on the front end or the server?
Even on mobiles, GPUs have much more computing power. Their architecture is designed for transforming vertices and computing lighting, very fast and highly parallel.
In the best case, SVG will take the input triangles and submit them to the GPU. Very small overhead (slightly more GPU bandwidth because the lighting is precomputed; when lighting is that simple, bandwidth is often more expensive than compute). But in the worst case, SVG will rasterize on the CPU, which is very slow in comparison.
If you rasterize on the server, e.g. send JPEG images to the browser, it becomes complicated and depends on many factors, like scene complexity and server hardware. Rasterizing triangles is much cheaper on the GPU, but the JPEG compressor needs the data in system RAM, not VRAM, so you need to download it back, which is relatively slow. Also, on many public clouds virtual GPUs are expensive.
I would expect a large difference for similar operations. A current-gen $300 GPU consumes 120 W doing 4.6 TFlops. A current-gen $300 CPU consumes 65 W doing 16 flops/cycle * 3.6 GHz * 8 cores ≈ 0.46 TFlops, so roughly a 10x gap in raw throughput at half the power. GPU cores don't spend electricity minimizing latency, e.g. predicting branches, and they run at much lower frequencies.
Also, some parts of the pipeline are implemented in hardware (rasterizer, texture units, alpha blending, Z-buffer); they don't cost flops, and the hardware implementation is likely even more efficient than shader cores. E.g. if you want a gradient, load/compute per-vertex colors and the rasterizer will interpolate in hardware.