Hacker News
Show HN: I built a tool to generate triangulated art from images (github.com/rh12503)
296 points by rh12503 6 months ago | 69 comments

Neat! Reminds me that I once made something similar, except in 3D.


(open the link and then click/drag the image)

This is really cool! I'd be interested in seeing what it looks like to fully drag the image 180 degrees (but can't because the camera resets as soon as the mouseup event fires). Would you accept a small PR to the demo page which does this via a control button instead of on mouseup?

Thanks! Actually I just pushed a fix to make the drag work outside the canvas. No idea why it wasn't that way to begin with.

The really cool thing someone suggested back in the day was to have it try to match two different images from two different angles. But I never quite found the time to tackle that ;D

This is amazing! Is there any way to download this in a 3D file format?

Thanks! The answer is: sort of, but not really. The maker page [1] has an "export data" button that writes out the vertex data (as a big series of RGB/XYZ values), which you could easily convert to whatever format.

But if you rendered the data somewhere else, you wouldn't get the right effect unless you replicated the perspective/FOV that I used (which are completely ad-hoc because I had no idea what I was doing).

So currently there's no super-easy way to take the data somewhere else and do something useful with it.

[1] https://andyhall.github.io/glsl-projectron/index.html

Very cool. Images like this are fairly popular in woodworking for low poly pet art:


(search "low poly" on /r/woodworking for a lot more)

DMesh is probably the most popular tool in this space that I've seen, but it's a very manual process (manually select vertexes and it fills the space with the average color).


Yours definitely gives a different look - a bit more aliased - which is interesting. Would be fun to try this one out too.

For low-poly art, I wonder if there could be a way to only triangulate a specific area (e.g. only the pet and not the background).

Also, if you need something less aliased, check out Triangle [1], a similar and awesome tool!

[1] https://github.com/esimov/triangle

This is great, especially the SF example, as it reminds me of Brian Lotti's paintings [1].

A common technique for painters is to establish areas of colour in the scene that are similar even though the eye perceives them differently. For example the shade beneath a tree - while complex in reality - can be easily represented with a single stroke of darker colour.

Zooming in to the SF example [2] you begin to see how this scene could be painted, which makes me think this tool could be a useful guide in painting tutorials.

[1] https://www.booooooom.com/2015/04/08/artist-profile-painter-... [2] https://github.com/RH12503/triangula/blob/main/assets/output...

This looks cool. It reminds me a lot of Primitive Pictures[0], coincidentally also written in Go, which supports other shapes besides triangles as well.

[0]: https://github.com/fogleman/primitive

Thanks! One main difference from Primitive is that Primitive's shapes can overlap, while Triangula generates a triangulation.

IMO both are cool but your method is way cooler!

Also check out my web app Boxy SVG [1], which integrates both vectorization algorithms as "Triangulate" and "Primitivize" generators [2].

[1] https://boxy-svg.com/app

[2] https://boxy-svg.com/blog/9

See also: SQIP, which uses Primitive: https://github.com/axe312ger/sqip

I'm not familiar with Go, but is there anything in the language which makes it a particularly good fit for this type of problem?

Not particularly, but Go is fast, compiles to small binaries, and supports concurrency via goroutines, which works well for genetic algorithms.
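As an illustrative sketch of that point (not Triangula's actual code, and the fitness function here is a made-up stand-in): fanning the fitness evaluation of a population out across goroutines takes only a few lines.

```go
package main

import (
	"fmt"
	"sync"
)

// evaluate is a stand-in for an expensive fitness function
// (in a triangulation tool this would render the candidate and score it).
func evaluate(member []float64) float64 {
	var sum float64
	for _, v := range member {
		sum += v * v
	}
	return -sum // closer to zero is fitter
}

// evaluateAll scores every member of a population concurrently,
// one goroutine per member.
func evaluateAll(population [][]float64) []float64 {
	fitnesses := make([]float64, len(population))
	var wg sync.WaitGroup
	for i, member := range population {
		wg.Add(1)
		go func(i int, member []float64) {
			defer wg.Done()
			fitnesses[i] = evaluate(member)
		}(i, member)
	}
	wg.Wait()
	return fitnesses
}

func main() {
	pop := [][]float64{{1, 2}, {1, 1}, {3, 1}}
	fmt.Println(evaluateAll(pop)) // prints "[-5 -2 -10]"
}
```

In a real genetic algorithm you'd usually cap concurrency at the number of CPU cores with a worker pool, since fitness evaluation is CPU-bound.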

Similar project worth checking out is Geometrize [1] also available on GitHub [2]

[1] https://www.geometrize.co.uk/

[2] https://github.com/Tw1ddle/geometrize

I really like this.

Others have already commented on the utility / impressiveness of the tool itself, but on a completely different front: you also put effort into making a really nice UI. I clicked, fully expecting to fire up a terminal instance with a bunch of args or what have you. Instead I got a polished app that's a pleasure to use. I could realistically share this with my friends and family, which is not very common for a tool like this.

I made something similar that runs in the browser using your webcam, mostly to visualise tracking keypoints.

It works best if you're in a well-lit place.

* https://github.com/Jonty/triangulator

* https://jonty.github.io/triangulator/

Cool! I'm guessing you use an edge detection algorithm like https://github.com/esimov/triangle?

Actually I use FAST keypoint detection, which is used to repeatably find "good" points for tracking using computer vision.
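For context, FAST classifies a pixel as a keypoint using a segment test on the 16-pixel Bresenham circle around it. A minimal, illustrative FAST-9 check in Go (not the code this project uses; real detectors add a fast pre-test on four pixels and non-maximum suppression):

```go
package main

import "fmt"

// The 16 offsets of a Bresenham circle of radius 3, in clockwise order,
// as used by the FAST detector.
var circle = [16][2]int{
	{0, -3}, {1, -3}, {2, -2}, {3, -1}, {3, 0}, {3, 1}, {2, 2}, {1, 3},
	{0, 3}, {-1, 3}, {-2, 2}, {-3, 1}, {-3, 0}, {-3, -1}, {-2, -2}, {-1, -3},
}

// isFASTCorner reports whether the pixel at (x, y) passes the FAST-9
// segment test: at least 9 contiguous circle pixels are all brighter
// than center+t, or all darker than center-t.
func isFASTCorner(img [][]int, x, y, t int) bool {
	c := img[y][x]
	// Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
	var state [16]int
	for i, off := range circle {
		p := img[y+off[1]][x+off[0]]
		switch {
		case p > c+t:
			state[i] = 1
		case p < c-t:
			state[i] = -1
		}
	}
	// Look for a contiguous run of >= 9 equal non-zero states,
	// wrapping around the circle.
	for _, want := range []int{1, -1} {
		run := 0
		for i := 0; i < 32; i++ { // two passes to handle wrap-around
			if state[i%16] == want {
				run++
				if run >= 9 {
					return true
				}
			} else {
				run = 0
			}
		}
	}
	return false
}

func main() {
	// Synthetic 7x7 patch: bright upper half, dark lower half,
	// candidate pixel in the middle.
	img := make([][]int, 7)
	for y := range img {
		img[y] = make([]int, 7)
		for x := range img[y] {
			if y <= 3 {
				img[y][x] = 255
			}
		}
	}
	img[3][3] = 100
	fmt.Println(isFASTCorner(img, 3, 3, 50)) // prints "true"
}
```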



This is very, very cool. It would also be amazing as a Photoshop filter.

But separately -- I've often wondered if the pixel grid is really the best way of storing compressed image data, and if something based on triangles couldn't be viable.

Just like progressive JPEG can render broad areas of color followed by filling in details, what if you used progressive layers of semitransparent colored triangle meshes to do the same? Or at least to form a base layer?

HEVC image encoding is already a big improvement over JPEG in that fixed square blocks are replaced by variable-sized blocks. But what if we got rid of rectangular blocks and replaced them with flexible triangles?

It might be computationally prohibitive for encoding, and also you'd need to find a really clever way of representing triangles and colors in a minimal number of bits. But curious if anyone here knows whether something like that's ever been attempted.

I remember seeing an experimental compression algorithm that made everything look like a painting, but I'm not sure how to find it again. It seems likely there have been other attempts at compression through vectorization.

Instead of outputting an image, can you output the points and triangles, eg as an SVG?

Could be fun to experiment with animation... especially with something like the astronaut, if you could mask the subject vertices from the background's vertices and then apply a random jostle animation to the background vertices, might be a fun trippy effect.

There is an SVG output option in the tool.

That's pretty sweet shading. Love how it captures the sunlight reflecting on the buildings in the SF example without just averaging out those colors and making it look bland.

To what extent does this reduce image file size? Could I use this as an alternative to dithering for lazily loading large images on a website, for example?

axe312ger/SQIP [1] does this effectively for low-quality image loading, relying on the aforementioned fogleman/primitive library. I think OP's project would be great to add as a new entry to the SQIP demo site [2].

In the thumbnail demo, the LQIP-custom approach (a simple resize to a low-res jpg thumbnail + jpg optimization) preserves the more salient features better and has compression on par or better than SQIP, with lower processing times. So in my opinion the simple extreme resize + jpgoptim is preferable for thumbnails.

Thumbnails are only a small part of the LQIP story though, and I can imagine RH12503/Triangula having much nicer results for larger images than fogleman/primitive. OP should consider writing an axe312ger/sqip plugin.

1. https://github.com/axe312ger/sqip

2. https://axe312ger.github.io/sqip/

That's an interesting idea! I suppose it could be used for compression, although my intentions were for this to be a generative art project.

I triangulated a 1988×1491 jpg using 10,000 points and managed to reduce the file to 20% of its original size, but the triangles could still obviously be seen.

You could probably make it into an SVG file to reduce the size even more.

Love this idea. A novel vectorization.

Adding my voice to this. It would be an amazing tool for web work -- the vectorized images look far better than the very small jpeg images used in lazy loading today.

Similar functionality is already implemented in the Boxy SVG editor [1]. There is also another, more powerful "Primitivize" generator which has more options, e.g. you can choose whether the vectorized image should consist of triangles or rectangles.

[1] https://i.imgur.com/I8Fq1bM.gif

And could inline, so that the preview image is loaded with the page itself.

It sure looks better aesthetically than esimov/triangle or fogleman/primitive! Goal achieved, I would say.

It would still be cool to see this compared to those in Low Quality Image Placeholder implementations, to find out whether the extra work on nicer aesthetics is preserved once blur is applied.

Blurring is a good idea! It could work with placeholder images, but it'll probably be better with smaller ones (e.g. profile pictures).

I'm impressed it manages to preserve the frames of the man's glasses.

Did you do that manually? Or does the algorithm somehow detect lines like that?

Thanks! The algorithm works iteratively: essentially, each iteration it makes small changes and then checks whether these changes are optimal. (There are many more details described on the wiki [1].)

The frames of the glasses were preserved because the algorithm "decided" that it was optimal to keep them with the limited number of points it has to work with.

[1] https://github.com/RH12503/Triangula/wiki/Explanation-of-the...

This reminds me of a project I found by Antirez (the creator of Redis) recently called shapeme: https://github.com/antirez/shapeme - which uses simulated annealing: https://en.wikipedia.org/wiki/Simulated_annealing to approximate an image using triangles. It took me down a really interesting rabbit hole: "The name of the algorithm comes from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects."

I thought it was really interesting that a useful algorithm like this was created and possibly influenced by a natural process. I wonder if this repo uses the same type of algorithm?

Triangula uses a modified genetic algorithm which I wrote about on the wiki here [1]. The reason I didn't use simulated annealing is because I ran some tests and found it was generally less effective for my case.

[1] https://github.com/RH12503/Triangula/wiki/Explanation-of-the...

Do you have a specific initialization scheme? The example gif goes very fast, but it seems to find very fit individuals in the first generation, which seems too good to be random.

Actually, the initialization is completely random. The reason why it might seem too optimal is because the algorithm chooses the colors most similar to the image when rendering the triangles.
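For illustration (this is an assumption about the details, not Triangula's code): the color "most similar to the image" for a triangle is naturally its mean color over the pixels it covers, since the mean minimizes the squared error for that region.

```go
package main

import "fmt"

// side returns the sign of the cross product (b-a) x (p-a),
// i.e. which side of the edge a->b the point p lies on.
func side(ax, ay, bx, by, px, py float64) float64 {
	return (bx-ax)*(py-ay) - (by-ay)*(px-ax)
}

// inTriangle reports whether (px, py) lies inside (or on) the triangle.
func inTriangle(px, py float64, tri [3][2]float64) bool {
	d1 := side(tri[0][0], tri[0][1], tri[1][0], tri[1][1], px, py)
	d2 := side(tri[1][0], tri[1][1], tri[2][0], tri[2][1], px, py)
	d3 := side(tri[2][0], tri[2][1], tri[0][0], tri[0][1], px, py)
	hasNeg := d1 < 0 || d2 < 0 || d3 < 0
	hasPos := d1 > 0 || d2 > 0 || d3 > 0
	return !(hasNeg && hasPos)
}

// meanColor averages the image pixels covered by the triangle.
func meanColor(img [][][3]float64, tri [3][2]float64) [3]float64 {
	var sum [3]float64
	n := 0.0
	for y := range img {
		for x := range img[y] {
			if inTriangle(float64(x), float64(y), tri) {
				for c := 0; c < 3; c++ {
					sum[c] += img[y][x][c]
				}
				n++
			}
		}
	}
	if n == 0 {
		return sum
	}
	for c := 0; c < 3; c++ {
		sum[c] /= n
	}
	return sum
}

func main() {
	// 3x3 gray image with one brighter red channel at (0, 0).
	img := make([][][3]float64, 3)
	for y := range img {
		img[y] = make([][3]float64, 3)
		for x := range img[y] {
			img[y][x] = [3]float64{90, 90, 90}
		}
	}
	img[0][0] = [3]float64{150, 90, 90}
	tri := [3][2]float64{{0, 0}, {2, 0}, {0, 2}}
	fmt.Println(meanColor(img, tri)) // prints "[100 90 90]"
}
```

A production implementation would rasterize each triangle with scanlines rather than testing every pixel against every triangle.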

Somewhat of a tangent: this project's GUI was built using Wails [1]. According to their docs, "The traditional method of providing web interfaces to Go programs is via a built-in web server." I'm not a Go developer so I'm hoping someone more familiar can comment on this. Are all Go GUIs basically Electron-lite apps?

[1] https://wails.app/

Wails v1 (which this project uses) uses Webview for rendering its frontend. You can read more about how Wails works here: https://wails.app/about/concepts/

Also, because I'm pretty sure you're going to be the only one to see this post: do you have any feedback on the app?

I have been in search of a good GUI library for go that works well on both Linux and Windows.

What are your thoughts on Wails?

How is the learning curve for people not very familiar with Web technologies? On that subject, does it require any webdev tools to be installed (nodejs, frameworks, etc)?

I found Wails very simple to use, and would highly recommend it to anyone looking for a lightweight GUI library.

However, Wails v1 uses mshtml (basically IE11) on Windows, so some features are unavailable.

Wails uses Webpack so you need npm installed when developing your app.

You might also be interested in Tauri [1] which is a similar framework but in Rust.

[1] https://github.com/tauri-apps/tauri

Apart from the very cool visual effect, the app got my attention as an interesting example of a lightweight GUI with Go. Another rabbit hole for today. Thanks!

My main question is: why choose a (slow) genetic algorithm over a more traditional Delaunay triangulation algorithm?

I'm a bit unsure of what you mean, but the genetic algorithm is used to find an optimal set of points, and then a Delaunay triangulation is created from those points.

Ah, I must've misread the wiki. I thought it meant you used the genetic algorithm for triangulation instead of point selection. What are your criteria for calculating the fitness of a candidate set of points?


I was just thinking about adding another wiki page for that, but I'll give a brief explanation here:

Firstly, a triangulation is made from the points and colors are chosen for each triangle.

Then, the variance between the triangles and the original image is calculated using Welford's online algorithm [1]. The variance is computed by iterating over the pixels of each triangle and comparing the pixel color of the original image to the color of that triangle.

Lastly, the fitness is multiplied by a weight to encourage the triangulation to cover the entire canvas.

The source code may be a bit confusing because I've applied many optimizations which make it 10-20x faster.

[1] https://en.wikipedia.org/wiki/Algorithms_for_calculating_var...
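For reference, Welford's algorithm accumulates mean and variance in a single pass without storing the samples. A minimal Go version (in Triangula the samples would presumably be the per-pixel color differences within each triangle, which is my assumption, not a detail from the source):

```go
package main

import "fmt"

// Welford accumulates mean and variance in a single pass,
// without storing the samples.
type Welford struct {
	n    int
	mean float64
	m2   float64
}

// Add folds one sample into the running statistics.
func (w *Welford) Add(x float64) {
	w.n++
	delta := x - w.mean
	w.mean += delta / float64(w.n)
	w.m2 += delta * (x - w.mean)
}

// Variance returns the population variance of the samples seen so far.
func (w *Welford) Variance() float64 {
	if w.n == 0 {
		return 0
	}
	return w.m2 / float64(w.n)
}

func main() {
	var w Welford
	for _, x := range []float64{2, 4, 9, 1} {
		w.Add(x)
	}
	fmt.Println(w.mean, w.Variance()) // prints "4 9.5"
}
```

The one-pass formulation matters here because the fitness is evaluated per pixel inside a hot loop, and it is numerically more stable than the naive sum-of-squares approach.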

Could this technique be used to index images for search? E.g., intuitively, the triangles and their layout seem to preserve more unique information about the image than a grid would, kind of like Voronoi partitions, or an entropy-preserving sample.

This isn't my area of expertise at all, but intuitively it seems like if you treated the triangles as a graph, you could sample a minimal subgraph from an image and then search for other files that have a similar subgraph. It's conceptually like a "geometric hash."

Is that what this technique is for? It seems to have more applications than just an image filter.

Note that the subgraph isomorphism problem [1] is NP-complete (also for planar graphs). There are very efficient data structures like k-d trees [2] that can be used to efficiently search for points in Euclidean spaces; doing something similar for graphs would be quite difficult.

[1]: https://en.wikipedia.org/wiki/Subgraph_isomorphism_problem

[2]: https://en.wikipedia.org/wiki/K-d_tree


Graph canonization [3] on the other hand is not known to be either polynomial time solvable (I think it is for planar graphs though) or NP-complete and has implementations that are quite efficient in practice. It can be used to search for the exact same graph, but again extending to searching for similar graphs is probably quite difficult.

[3]: https://en.wikipedia.org/wiki/Graph_canonization

I originally intended this to be a generative art side project, but there's been a multitude of great ideas in the comments here!

The graph idea could work, but the algorithm produces slightly different results each time it is run. It could still be really interesting to try!

I was thinking that the adjacency matrix describing the graph you extract from the image would produce a string you can chunk and do a fast search on; then it's a standard similarity/distance string search in Redis, or a probabilistic filter.

The compute for indexing images would be relatively expensive, but a similarity search would be super fast. The caveat is that I don't know how image search is done today, though it probably should do something like this anyway.

Looks cool!

And might be useful for svg placeholders before lazy-loading larger images.

Old but still working: recreate an image as vegetables http://veganizer.veganblatt.com/

Reminds me of the techniques Spaceballs used to make the mind blowing "9 fingers" demo for Amiga in 1993.



This is awesome

I recently got an AxiDraw and am going to see if I can adapt this into a clean set of lines for use with a mechanical plotter.

A friend of mine and I intend to do the same, for which I came up with this: https://github.com/matze/inkdrop

That sounds super cool! I'd be interested in seeing a picture if you end up doing it. Let me know if you need any help with the API!

Nice idea! I’ve been making low poly images manually for vinyl cutter projects with the kids. I might try the same!

Cool. I can see this being used to determine perception thresholds. E.g., by testing priming of progressively more detailed pictures, or by blurring the triangulated pictures (triangulate a picture a bit, but not too much, then blur it in another tool, and you can make out the main shapes better; sometimes even read the text).

Love it. There's a real use in generating cartographic representations of aerial imagery there I think.

Thanks for sharing! Interesting to see how the algorithm preserves thin dark lines like in the glasses or the face of the clown.

One of my side projects produces similar images: https://lowpolynator.com/

Similar thing happening in this video here https://www.youtube.com/watch?v=Try2NGwdIt0 "Evolutionary Algorithm draws Breath of the Wild"

I wonder if you can constrain the problem a bit more: say, limit the number of triangles that can be used; limit the color palette to a small set of colors. It would make for interesting results.

You can already limit the number of triangles (points) used.

And you can limit the palette as postprocessing in any image editor. I can't see much reason why you'd want to build that into this tool.

Well done! This looks amazing :) I like the way you can adjust parameters for the output image.
