(open the link and then click/drag the image)
The really cool thing someone suggested back in the day was to have it try to match two different images from two different angles. But I never quite found the time to tackle that ;D
But if you rendered the data somewhere else, you wouldn't get the right effect unless you replicated the perspective/FOV that I used (which are completely ad-hoc because I had no idea what I was doing).
So currently there's no super-easy way to take the data somewhere else and do something useful with it.
(search "low poly" on /r/woodworking for a lot more)
DMesh is probably the most popular tool in this space that I've seen, but it's a very manual process (manually select vertexes and it fills the space with the average color).
Yours definitely gives a different look - a bit more aliased - which is interesting. Would be fun to try this one out too.
Also, if you need something less aliased, check out Triangle, a similar and awesome tool!
A common technique for painters is to establish areas of colour in the scene that are similar even though the eye perceives them differently. For example the shade beneath a tree - while complex in reality - can be easily represented with a single stroke of darker colour.
Zooming in to the SF example, you begin to see how this scene could be painted, which makes me think this tool could be a useful guide in painting tutorials.
Others have already commented on the utility / impressiveness of the tool itself, but on a completely different front: you also put effort into making a really nice UI. I clicked, fully expecting to fire up a terminal instance with a bunch of args or what have you. Instead I got a polished app that's a pleasure to use. I could realistically share this with my friends and family, which is not very common for a tool like this.
It works best if you're in a well-lit place.
But separately -- I've often wondered if the pixel grid is really the best way of storing compressed image data, and if something based on triangles couldn't be viable.
Just like progressive JPEG can render broad areas of color followed by filling in details, what if you used progressive layers of semitransparent colored triangle meshes to do the same? Or at least to form a base layer?
HEVC image encoding is already a big improvement over JPEG in that fixed square blocks are replaced by variable-sized blocks. But what if we got rid of rectangular blocks and replaced them with flexible triangles?
It might be computationally prohibitive for encoding, and also you'd need to find a really clever way of representing triangles and colors in a minimal number of bits. But curious if anyone here knows whether something like that's ever been attempted.
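Just to make the bit-budget question concrete, here's a back-of-the-envelope sketch (purely hypothetical numbers, not anything an existing codec does): quantize each vertex coordinate and each triangle's fill color and see what a naive, uncompressed triangle stream would cost.

```go
package main

import "fmt"

// Hypothetical encoding: 12 bits per vertex coordinate (enough for
// images up to 4096px on a side) and 16 bits per triangle color (RGB565).
const (
	bitsPerCoord = 12
	coordsPerTri = 6 // 3 vertices × (x, y)
	bitsPerColor = 16
)

func main() {
	bitsPerTriangle := bitsPerCoord*coordsPerTri + bitsPerColor
	for _, n := range []int{500, 2000, 10000} {
		totalBytes := n * bitsPerTriangle / 8
		fmt.Printf("%5d triangles ≈ %6d bytes (before entropy coding)\n", n, totalBytes)
	}
}
```

A real mesh would share vertices between neighbouring triangles (store a vertex list once, then three indices per triangle), which already cuts this down considerably before any entropy coding.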
Could be fun to experiment with animation... especially with something like the astronaut, if you could mask the subject vertices from the background's vertices and then apply a random jostle animation to the background vertices, might be a fun trippy effect.
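A minimal sketch of what that jostle could look like, assuming you already have the triangulation's point set and a mask saying which points belong to the background (all names here are made up for illustration):

```go
package main

import (
	"fmt"
	"math/rand"
)

type Point struct{ X, Y float64 }

// jostleFrame returns a copy of the points with background points nudged
// by a small random offset; re-triangulate and re-render the result each
// frame to get the shimmering effect.
func jostleFrame(points []Point, isBackground []bool, amplitude float64) []Point {
	out := make([]Point, len(points))
	for i, p := range points {
		if isBackground[i] {
			p.X += (rand.Float64()*2 - 1) * amplitude
			p.Y += (rand.Float64()*2 - 1) * amplitude
		}
		out[i] = p
	}
	return out
}

func main() {
	points := []Point{{10, 10}, {200, 150}, {400, 300}}
	isBackground := []bool{true, false, true}
	fmt.Println(jostleFrame(points, isBackground, 3.0))
}
```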
In the thumbnail demo, the LQIP-custom approach (simple resize to a low-res jpg thumbnail + jpg optimization) preserves the more salient features better and has compression on par with or better than SQIP, with lower processing times. So in my opinion the simple extreme resize + jpgoptim is preferable for thumbnails.
Thumbnails are only a small part of the LQIP story though, and I can imagine RH12503/Triangula having much nicer results for larger images than fogleman/primitive.
OP should consider writing an axe312ger/sqip plugin.
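For reference, the "simple extreme resize + optimize" approach is only a few lines; here's a rough sketch in Go using golang.org/x/image/draw for the downscale (jpgoptim would run as a separate post-processing step; error handling is omitted for brevity):

```go
package main

import (
	"image"
	"image/jpeg"
	_ "image/png"
	"os"

	"golang.org/x/image/draw"
)

func main() {
	in, _ := os.Open("photo.png") // hypothetical input file
	defer in.Close()
	src, _, _ := image.Decode(in)

	// Scale down to a ~20px-wide placeholder, preserving aspect ratio.
	w := 20
	h := src.Bounds().Dy() * w / src.Bounds().Dx()
	dst := image.NewRGBA(image.Rect(0, 0, w, h))
	draw.ApproxBiLinear.Scale(dst, dst.Bounds(), src, src.Bounds(), draw.Over, nil)

	// Re-encode at very low JPEG quality.
	out, _ := os.Create("placeholder.jpg")
	defer out.Close()
	jpeg.Encode(out, dst, &jpeg.Options{Quality: 20})
}
```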
I triangulated a 1988×1491 jpg using 10,000 points and managed to reduce the file size to 20% of the original, but the triangles could still obviously be seen.
It still would be cool to see this compared to those in Low Quality Image Placeholder implementations and find out whether the extra work on nicer aesthetics is preserved once the blur is applied.
Did you do that manually? Or does the algorithm somehow detect lines like that?
The frames of the glasses were preserved because the algorithm "decided" that it was optimal to keep them with the limited number of points it has to work with.
I thought it was really interesting that a useful algorithm like this was created and possibly influenced by a natural process. I wonder if this repo uses the same type of algorithm?
Also, because I'm pretty sure you're going to be the only one to see this post, do you have any feedback on the app?
What are your thoughts on Wails?
How is the learning curve for people not very familiar with Web technologies? On that subject, does it require any webdev tools to be installed (nodejs, frameworks, etc)?
However, Wails v1 uses mshtml (basically ie11) on Windows, so some features are unavailable.
Wails uses Webpack so you need npm installed when developing your app.
You might also be interested in Tauri, which is a similar framework but in Rust.
Firstly, a triangulation is made from the points, and a color is chosen for each triangle.
Then, the variance between the triangles and the original image is calculated using Welford's online algorithm. The variance is computed by iterating over the pixels of each triangle and comparing the pixel color of the original image to the color of that triangle.
Lastly, the fitness is multiplied by a weight to ensure that the triangulation covers the entire canvas.
The source code may be a bit confusing because I've applied many optimizations which make it 10-20x faster.
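For anyone curious, a stripped-down, unoptimized sketch of that Welford step might look something like this (it illustrates the description above, not the actual Triangula code):

```go
package main

import "fmt"

type RGB struct{ R, G, B float64 }

// colorDiff is a simple per-pixel error metric (sum of squared channel
// differences); the real project may weight channels differently.
func colorDiff(a, b RGB) float64 {
	dr, dg, db := a.R-b.R, a.G-b.G, a.B-b.B
	return dr*dr + dg*dg + db*db
}

// triangleVariance streams over a triangle's pixels, takes the difference
// between each original pixel and the triangle's fill color, and updates a
// running variance with Welford's online algorithm, so nothing needs to be
// stored and the variance is computed in a single pass.
func triangleVariance(pixels []RGB, fill RGB) float64 {
	var count, mean, m2 float64
	for _, p := range pixels {
		x := colorDiff(p, fill)
		count++
		delta := x - mean
		mean += delta / count
		m2 += delta * (x - mean)
	}
	if count < 2 {
		return 0
	}
	return m2 / count // population variance
}

func main() {
	pixels := []RGB{{0.9, 0.2, 0.1}, {0.8, 0.25, 0.15}, {0.85, 0.3, 0.2}}
	fill := RGB{0.85, 0.25, 0.15}
	fmt.Println("variance:", triangleVariance(pixels, fill))
}
```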
This isn't my area of expertise at all, but intuitively it seems like if you treated the triangles as a graph, you could sample a minimal subgraph from an image and then search for other files that have a similar subgraph. It's conceptually like a "geometric hash."
Is that what this technique is for? It seems to have more applications than just an image filter.
Graph canonization, on the other hand, is not known to be either polynomial-time solvable (I think it is for planar graphs though) or NP-complete, and it has implementations that are quite efficient in practice. It can be used to search for the exact same graph, but again, extending it to searching for similar graphs is probably quite difficult.
The graph idea could work, but the algorithm produces slightly different results each time it is run. It still could be really interesting to try!
The compute for indexing images would be relatively expensive, but a similarity search would be super fast, with the caveat that I don't know how image search is done today, and it probably should do something like this anyway.
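Just to sketch the idea (all names and the quantization scheme here are invented for illustration): quantize each triangle's centroid and fill color into coarse buckets, treat the set of buckets as the image's signature, and compare signatures with a Jaccard-style overlap.

```go
package main

import "fmt"

type Tri struct {
	CX, CY  float64 // centroid, normalized to [0,1)
	R, G, B float64 // fill color, normalized to [0,1)
}

// signature quantizes each triangle into a coarse position cell plus a
// coarse color bucket and collects the distinct cells.
func signature(tris []Tri) map[uint32]bool {
	sig := make(map[uint32]bool)
	for _, t := range tris {
		cell := uint32(t.CX*8)<<16 | uint32(t.CY*8)<<12 |
			uint32(t.R*4)<<8 | uint32(t.G*4)<<4 | uint32(t.B*4)
		sig[cell] = true
	}
	return sig
}

// overlap is a Jaccard-style similarity between two signatures.
func overlap(a, b map[uint32]bool) float64 {
	inter := 0
	for k := range a {
		if b[k] {
			inter++
		}
	}
	union := len(a) + len(b) - inter
	if union == 0 {
		return 0
	}
	return float64(inter) / float64(union)
}

func main() {
	a := []Tri{{0.1, 0.1, 0.9, 0.2, 0.1}, {0.6, 0.4, 0.2, 0.5, 0.8}}
	b := []Tri{{0.12, 0.11, 0.88, 0.22, 0.1}, {0.61, 0.39, 0.2, 0.5, 0.8}}
	fmt.Println("similarity:", overlap(signature(a), signature(b)))
}
```

Real image search probably uses learned embeddings rather than anything like this, so treat it purely as a toy.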
And might be useful for svg placeholders before lazy-loading larger images.
Recreate an image as vegetables: http://veganizer.veganblatt.com/
I recently got an axidraw and am going to see if I can adapt this into a clean set of lines for use with a mechanical plotter.
One of my side projects produces similar images: https://lowpolynator.com/
And you can limit the palette as postprocessing in any image editor. I can't see much reason why you'd want to build that into this tool.