As a result, I decided to follow through and make more of it. So today I am announcing Primitive for macOS, now available on the Mac App Store.
The core is still implemented in Go. The GUI is written in Objective-C (haven't bothered to learn Swift yet!) and communicates with the Go process over stdin/stdout via NSTask and NSPipe. The rendering is done on the front-end; the backend just sends shape coordinates and such.
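For anyone curious how the two processes might talk to each other, the same parent/child pipe architecture can be sketched in Python, with `subprocess` standing in for NSTask and NSPipe. The command names and the shape record format below are invented for illustration; the app's actual protocol isn't published:

```python
import subprocess
import sys

# A toy "core" that answers one command per line on stdin, the way the Go
# backend sits behind NSTask/NSPipe. The "step"/"quit" commands and the
# shape record are made up for this sketch.
child_src = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    if line.strip() == 'step':\n"
    "        print('shape triangle 10 10 80 20 40 70', flush=True)\n"
    "    elif line.strip() == 'quit':\n"
    "        break\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", child_src],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
proc.stdin.write("step\n")     # GUI asks the core for one optimization step
proc.stdin.flush()
reply = proc.stdout.readline().strip()  # core answers with a shape record
proc.stdin.write("quit\n")
proc.stdin.flush()
proc.stdin.close()
proc.wait()
print(reply)
```

On the macOS side, NSTask's standardInput/standardOutput pipes play the role of `proc.stdin`/`proc.stdout` here, and the front-end would parse each record and draw the shape itself.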
I spent a lot of time fleshing out the GUI layout to make it as simple yet powerful as possible. I hope you find it to be intuitive and I hope that you are pleased with the resulting images. I look forward to seeing what people do with the app. :)
Curious why you're using pipes to communicate when you could use CGO to build the Go code as its own static C library and communicate bidirectionally that way? Granted, it's a bit more complex than it sounds (I did this to bring v.io to iOS), but I'd figure it'd still be easier than the alternative?
Or even better, actual bools and strings like a real API.
I don't know what the Go library looks like in terms of an API, but the pipes already require serialization. This at least makes that hookup slightly easier, although the CGO bridge has some overhead of its own, so it wouldn't completely surprise me if a Unix FIFO buffer were in fact faster, especially if there's serialization in both situations.
Awesome, I see you've added new features too. There goes my evening. A suggestion for video: allow loading the shape data from the previous frame and using it as a starting point. Could bring you closer to real-time processing :)
I just put a complex image with shadows/light sources and it did an amazing job. It was fun to watch and eventually stop. It was even cooler to open in sketch and see all the shapes and the way the image was formed. A++
I missed the original post on the primitive core; I hadn't seen this before.
Just tried it out, and it is probably the coolest 'Show HN' I've ever seen. Spent the last 2 hours playing around with some pictures and different configs, and I absolutely love what this is doing.
Also, really appreciate that you provided a value add to the product sold, without compromising the open source core. Thank you for that.
Anyways, good job, and good luck. Hope this sells tons, cause it truly is amazing.
I remember seeing the lib a few months ago and thought it was neat. I even went to find the repo and call shenanigans on this post before I looked at the comments. Really cool to see you following up with it.
I downloaded your app from the MAS and find it beautiful.
I have previously implemented a "compressed sensing" image system in Processing; it produces very similar results.
I was wondering whether you will be adding support for exporting animations, both of the image being built up and (as you mention) of many images matching criteria formed from different seeds.
Any chance that we can get the core for other OSs? Sounds like it's where all the magic happens and others could quickly work on UIs for it on other platforms.
I bet it would look really cool to have 3D primitives that can be oriented with respect to the time dimension. So for example, a spherical primitive would be a circle that grows and shrinks as frames progress.
> Well, a problem could be that you get a lot of noise if all the frames are rendered independently.
The site shows an example of the same still image rendered several times with the same parameters, which produces an interesting animated effect. I think that could make for a distinctive animation style.
But for a more consistent style between frames, it'd help to have a way to seed the RNG consistently.
I think there is more to it than just seeding the RNG consistently.
For example: what if you have a minor change in the area of the screen that got the first primitive (and hence used the first few RNG numbers)? Then all the remaining primitives will use different RNG numbers.
I think the problem becomes more difficult: you want to find the primitives that result in the minimal change wrt the previous frame.
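One cheap way to get that temporal coherence is to warm-start each frame from the previous frame's shapes and only hill-climb from there, rather than re-randomizing every frame. A toy 1-D sketch of the idea; the "shapes" and error metric here are stand-ins, not Primitive's actual internals:

```python
import random

def score(shapes, target):
    # Stand-in error metric: squared distance of shape "values" from target.
    return sum((s - t) ** 2 for s, t in zip(shapes, target))

def refine(shapes, target, rng, steps=2000):
    # Hill climb: nudge one value at a time, keep changes that reduce error.
    shapes = list(shapes)
    best = score(shapes, target)
    for _ in range(steps):
        i = rng.randrange(len(shapes))
        old = shapes[i]
        shapes[i] = old + rng.uniform(-1, 1)
        new = score(shapes, target)
        if new < best:
            best = new
        else:
            shapes[i] = old  # revert bad nudges
    return shapes

rng = random.Random(42)              # fixed seed: reproducible runs
frame1 = [10.0, 20.0, 30.0]          # "shapes" fitted to frame 1
frame2_target = [10.5, 19.5, 30.2]   # next frame differs only slightly

# Warm start: begin from frame1's shapes instead of from scratch, so the
# result stays close to the previous frame (less flicker between frames).
frame2 = refine(frame1, frame2_target, rng)
err = score(frame2, frame2_target)
print(round(err, 4))
```

Because the search starts near the previous solution, small scene changes produce small shape changes, which is exactly what you want between video frames.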
I'm surprised I couldn't find any SVG versions of sample images, as (I assume) for many interested parties, the vector format output is the unique selling proposition for a, well, vector-based application.
Two comments:
1. The original photos are beautiful. What equipment are you using? We're extremely spoiled here in the PNW!
2. Have you considered printing any of the Primitive forms out and displaying them?
Thanks! All four were taken with a Sony Alpha A7S with a 35mm lens. I haven't thought about printing any of the Primitive images, but mostly because I only downloaded it about 20 minutes before posting these. They might look interesting on canvas, tempting!
Too late to edit my comment, but I added a couple more photos if you want to check them out. A night scene that I thought would be a tough test (hard to do stars!) and some columnar basalt that turned out great.
Thanks, it was an early season night (Milky Way is only visible in the Summer here) and it started cloudy. I stuck it out and got lucky. That really bright point above the mountain is Mars, pretty neat!
It's just a simple jQuery plug-in called Twenty Twenty that I found when I had the idea to take the same picture at different times and compare the result. Drag/swipe across ended up working better as an effect than my original fade-in idea. You can view source to see how it works, it's really simple.
Playing with that comparison, it looks like there's a +y bias in your algorithm. If I didn't say that in a way that's clear, check out the Mowich Lake picture in particular. Its features are mirrored above and below the shoreline, so the generated features should also be somewhat mirrored, right?
Instead, the entire visual center of the image gets moved up by the Primitivization. It looks like trees on the top half get their exposed tops connected, but those same crowns are cropped in the reflection.
This is such a phenomenal project. Thanks for making it!
I have let it run for ~40k shapes on a very hi-res photograph of my adorable children in front of a stream, and it is a beautiful thing to watch the resolution unfold. I love that I end up with a vector graphic at the end.
I have a crazy question: It generally seems to fill in detail uniformly, across the entire image. It's like the entire thing is coming slowly into focus. At some point, I'd prefer the algorithm to care a little less about filling in detail on the background, and focus more detail on the "items of interest" in a scene, the high-contrast regions or something. Perhaps even to have some manual control over it. Would this be possible? I am a total amateur when it comes to image analysis, btw, just thought it would be cool.
That would be a great feature. You could spend 500 shapes on the background and 500 on the foreground, instead of needing 3000 to get high detail everywhere.
It would be possible to do that in the app if you could restrict new shapes to a specified region.
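A sketch of how such a restriction could work if the optimizer already samples random positions: rejection-sample candidates against a user-supplied mask. Everything here (the mask, the sampler) is hypothetical, not Primitive's real code:

```python
import random

def sample_point(rng, width, height, mask=None, max_tries=1000):
    """Pick a random shape position; if a mask is given, only accept
    points where mask(x, y) is True (the user's region of interest)."""
    for _ in range(max_tries):
        x = rng.randrange(width)
        y = rng.randrange(height)
        if mask is None or mask(x, y):
            return x, y
    raise ValueError("mask rejected every candidate")

rng = random.Random(7)

# Hypothetical mask: only allow new shapes in the right half of a
# 100x100 image, e.g. where the subject of the photo sits.
in_region = lambda x, y: x >= 50

points = [sample_point(rng, 100, 100, in_region) for _ in range(200)]
print(all(x >= 50 for x, y in points))
```

A soft variant would weight sampling by local contrast or a saliency map instead of a hard mask, spending more shapes where detail matters.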
Cool, I'm looking forward to giving this a try... I've always been a big fan of geometric art.
I do wish I could buy it from your site vs. the App Store... I've just had so many bad experiences with having to 'transfer my license' away from App Store purchases.
I made something similar years ago. The primary motivation was my need to print low-resolution images at poster sizes, back when cameras didn't yet have high enough resolution.
Mine used transparent images as the "shapes" and then it would "stamp" the shapes onto a larger image. It could use multiple shapes in a single image.
It doesn't look nearly as nice, but maybe someone with more creativity than me could create shapes that make it much more interesting. Anyone have any interest in a library that does that?
Seen that before. Cool from the evolutionary algorithm standpoint, but Primitive produces nicer-looking results in mere seconds. :) But we were both inspired by the same original Mona Lisa project.
Shameless plug: similar artistic effects can be achieved with ImageTracer, though the algorithm is different. It's a public domain tracing and vectorizing library that outputs SVG.
I write an app like this about once a year (usually a Java command-line implementation).
To my embarrassment, I never got it completely right. The last installment even provided statistics that could be used to compare the effectiveness of different strategies and parameters.
The most recent one featured:
* classic genetic algorithm (population size etc.) with different methods for selection and crossover
* dumb hill climbing
* the "additive" method described in OP's app
* switching of methods at certain points (e.g. switching to hill climbing if the genetic algorithm didn't improve things for a certain time)
* bitmap and SVG output
* statistics output that can be plotted with a plotting library, so you can compare the effectiveness of different strategies
* multithreading, incl. tests to find the sweet spot for the number of concurrent threads
~~~
I don't know why, but I never really reached the point where my results were of comparable quality to Roger Alsing's Mona Lisa or even alteredQualia's image evolver (http://alteredqualia.com/visualization/evolve/).
Well, soon it's time for the next attempt.
One interesting realization: my desktop computer is about 10x as fast as the Raspberry Pi 3 for this kind of CPU-intensive work.
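For anyone who wants to experiment, the "additive" method in the list above boils down to a small greedy loop: for each new shape, try random candidates and keep the one that lowers the error most. A toy 1-D version, with constant-valued intervals standing in for triangles and ellipses:

```python
import random

def additive_fit(target, n_shapes, rng, tries=200):
    """Greedy 'additive' loop: start from a blank canvas and, for each new
    shape, test random candidates and keep the best improvement. Shapes
    here are 1-D intervals with a constant value, a stand-in for the
    2-D primitives a real implementation scores against pixel error."""
    canvas = [0.0] * len(target)

    def err(c):
        return sum((a - b) ** 2 for a, b in zip(c, target))

    for _ in range(n_shapes):
        best_canvas, best_err = canvas, err(canvas)
        for _ in range(tries):
            lo = rng.randrange(len(target))
            hi = rng.randrange(lo, len(target)) + 1
            v = rng.uniform(-1, 1)
            cand = canvas[:]
            for i in range(lo, hi):
                cand[i] += v          # "stamp" the interval onto the canvas
            e = err(cand)
            if e < best_err:
                best_canvas, best_err = cand, e
        canvas = best_canvas          # commit the winning shape
    return canvas, err(canvas)

rng = random.Random(1)
target = [0.2, 0.9, 0.9, 0.4, 0.1, 0.7]
start_err = sum(t * t for t in target)   # error of the blank canvas
canvas, final_err = additive_fit(target, n_shapes=10, rng=rng)
print(final_err < start_err)
```

Because each committed shape is the best of its candidate pool, the error is monotonically non-increasing, which is what makes the additive method so much faster than evolving a whole population at once.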
FYI - My wife, who does collaging, just got Primitive off the MAS, and she loves it. She's running it on an older iMac. She's impatient, and will probably go buy a Mac Pro in the morning. Thanks - I think...
This is really cool - just bought it. I'd like a good way to get photos from the Photos app into it, either by sharing them or by directly accessing the photo roll from within Primitive.
I sadly have no Mac to run it on - will you be porting to other OSes (I'm primarily Windows), or is this the same code as you shared on Github with a UI?
+1. Running linux here. It would be really cool if I could test this out.
Edit: Found that the core is on GitHub (https://github.com/fogleman/primitive). All you need is Go to run it.
Guess until then imagemagick + command line primitive can do the business (convert video to a series of images w/ IM, process with primitive, recreate the video from processed images w/ IM).
Animated raster images depicting a sequence of overlaid primitive vector shapes for a single input? Or just iterating the existing algorithm over a video input?
Cool project, best of luck. Off-topic question: are you planning to focus more on the MAS, and do you think it's still viable to make a living off it solo? (Yes, it depends on COL and needs; still interested in your perspective.)
If you're able to do a follow up post in e.g. a month I'd enjoy reading!
Especially if you decide to sell directly I'd be curious how that fares vs the App Store.
One question: That seems like a really permissive license for a $9.99 app (I realize GitHub's just got the core). Are you worried about someone cloning it?
One request: I don't know if it's possible, but in iOS Tweetbot the bot's pictures show the unmodified picture first (requiring a swipe to see the bot's work). It feels to me like it would be better to show the primitive version first, but I realize this may not be feasible or universally preferred.
I think the majority of work here has been put into designing and implementing the UI. A friend did something like the core a while back, in a couple of days. I'm sure many others have and many others will.
I'd also think that app store copycats try and pick easy, profitable projects. Just having the core implemented in Go would probably be a turn-off for many of them.
I really appreciate keeping the core FLOSS and hope that it won't cause problems for the author.
I just purchased! I'm not even sure what my use will be, but your work is both mathematically interesting and aesthetically beautiful, and I wanted to support it. :)
This app looks fantastic! I'm impressed by the various photos posted on Primitive's twitter account: https://twitter.com/PrimitivePic
On a separate note @fogleman, would be able to talk a little about how you architected the app? I'm curious about how you've set up Objective-C communication with Go using NSTask and NSPipe.
Oh wow, I'm impressed with the results you were able to obtain! I was able to get similar results on my "genetic draw", but it looks like I have some optimizing to do :)
https://github.com/kennycason/genetic_draw
As a designer, this is what I've waited for for a very long time. I've already created tons of SVG photos and have started animating them in css. Super cool effect!
Now some of you nerds need to tell me how I can animate the dash-array/dash-offset animation with 2000 paths without it being so slow! :D
We are totally loving this at home. The kids love walking away as far as possible and seeing how the details seem to come out. I have a picture here where the boat on the water is a little triangle, yet at a distance you would swear the boat is very detailed. Money so well spent.
This is very cool. I had a nearly identical idea a few months ago (and somehow missed your post the first time around) based around feature detection in images, and this is way cooler than what I had envisioned.
I don't know how the algorithm works, but I noticed that even figures with a very rough level of approximation are almost indistinguishable from the original when I defocus my eyes.
If the algorithm could work well on a mobile device, this would make awesome camera filters for something like Instagram, likely could make some $ or get someone to buy it from you.
Nice, but the problem with this kind of effect is that when you zoom out enough, you lose the effect and it just looks like the original photo. (Try it in your browser on the demo page, e.g. by pressing Ctrl and minus.)
I did a simulated annealing version 5 or 6 years ago (so I could have a neat Christmas card), and it was surprisingly slow (well, I had a slower computer, too). Congrats on making it fast and nice (and on exporting vectors, which is something I didn't even bother to try!)
I found similar results in my simulations as well.
Even my genetic implementations were "slowish", both simple hill climbing (single parent) and two-parent with crossover/mutation. Though after seeing this awesome implementation, I'm going to go do some work on improving mine.
Another World used that technique as a means of compression, but it was done manually by the creator (and almost drove him crazy, tons of work). I wonder if this could be used as the basis for a modern image compressor.
I'm blown away by the fantastic effect from such a random algorithm; I was expecting something classic like edge detection and image segmentation, at least in the first stage of image creation.
> if a word denotes the end-point of some scale (as unique surely does), then it can be used — and will be used — in describing approximations to that end-point, using approximative expressions like almost and nearly. (If there are only two occurrences of X in the world, then each of these is nearly unique.) Then, of course, you can ask how close to the end-point something is by asking how X it is, and you can describe something that has very few competitors for being the one and only as very X, and you can describe something that has no competitors at all as entirely X.
> Back up. Some of you are objecting that unique does not denote the end-point of a scale, and you say that because unique is not used in mathematics that way. But it's a mistake to suppose, when we're talking about ordinary language, that the mathematical usage of terms takes priority over ordinary people's usage of them. Yes, in mathematical usage, unique is used "crisply", for 'one and only one' (and that's an important concept to have in mathematical contexts), but, frankly, this really doesn't have beans to do with how unique is used in ordinary English. Instead, the mathematical usage is a specialization, a refinement, of ordinary English in a technical context.
Funny. I'd managed to read "president" as "professor," googled it, and found a fairly well-known physicist, which made my second quoted paragraph seem especially relevant. https://en.wikipedia.org/wiki/Albert_Allen_Bartlett
The Lenna image that is drawn is originally from a nude image in Playboy [1]. It causes me to associate your work with sexism and pornography. I'm sure not everyone makes this association, but why not use a free stock image or a great work of art for the demo?
>why not use a free stock image or a great work of art for the demo
Because a stock image is not standardized and has not been a standard for decades like Lenna has.
It's just about impossible to go back in history and change all the research papers and recompile all the algorithms and update them to a new image.
Of course we could still do it if enough people cared. But do we? Is pornography morally bad? What about cropped pornography like Lenna? Do the majority of people even think pornography is sexism? If it is sexism, does that inherently make pornography bad? Is all sexism bad? Is it bad that men are attracted to women and vice versa, and that sexually biased attraction exists? I would answer all of these in the negative.
While I personally feel like porn is dubious (primarily due to the exploitative nature of the industry) I feel like this whole controversy is way overblown.
Before this thread I had no idea this controversy even existed. As far as I knew, the Lenna image was just a pretty decent photo, not the result of cropping a Playboy centerfold.
I guess the idea that a derivative artwork can be seen as a unique artwork in its own right doesn't mean anything anymore...
Someone else mentioned this and I agreed, so I already replaced it before seeing this comment. The Lenna image did make a good demo, but so do other images.
I find faces a little problematic actually. Since facial recognition is such a huge part of our brain, it always seems like Primitive is ignoring the face too much and spending time on extraneous details. (I've previously experimented with weighting some areas of an image more than others in the algorithm because of this.)
I think the flower works well, but I might replace it yet again if I find an even better demo when I have more time to spend on it.
Upvoted (as with all your other posts here) because you made a perfectly valid criticism that the program author agreed with and adopted, but the geek boys got typically defensive.
I don't think you're going to have any effect, simply due to 1. the complexity of the image, as it's a real photograph, and 2. the complexity of colors and hues in the image.
There's a reason why it's seen as a "standard test image" and not as fodder for sexism: it's the above, plus historical baggage.
Let me know if you have any questions!