As a result, I decided to follow through and make more of it. So today I am announcing Primitive for macOS, now available on the Mac App Store.
The core is still implemented in Go. The GUI is written in Objective-C (haven't bothered to learn Swift yet!) and communicates with the Go process over stdin/stdout via NSTask and NSPipe. The rendering is done on the front-end; the backend just sends shape coordinates and such.
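For the curious, the Go side is essentially a read-a-command, print-a-result loop. The sketch below captures the shape of it, though the command names and the tab-separated reply format are simplified stand-ins rather than the real protocol:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Minimal command loop: read one request per line from stdin,
    // write one result per line to stdout. The "ADD" command and the
    // tab-separated reply are stand-ins, not the real protocol.
    func main() {
        in := bufio.NewScanner(os.Stdin)
        out := bufio.NewWriter(os.Stdout)
        defer out.Flush()

        for in.Scan() {
            fields := strings.Fields(in.Text())
            if len(fields) == 0 {
                continue
            }
            switch fields[0] {
            case "ADD":
                // Compute the next best shape here, then emit its type,
                // vertices, and color for the GUI to render.
                fmt.Fprintln(out, "triangle\t10,20\t80,35\t44,90\t#e0483bff")
                out.Flush() // one reply per request keeps the pipe responsive
            case "QUIT":
                return
            }
        }
    }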
I spent a lot of time fleshing out the GUI layout to make it as simple yet powerful as possible. I hope you find it to be intuitive and I hope that you are pleased with the resulting images. I look forward to seeing what people do with the app. :)
Let me know if you have any questions!
Curious why you're using pipes to communicate when you could use cgo to make the Go code its own static C library and communicate bidirectionally that way? Granted, it's a bit more complex than that sounds (I did this to bring v.io to iOS), but I'd have figured it would still be easier than the alternative?
I don't know what the Go library looks like in terms of an API, but the pipes already require serialization. That at least makes the hookup slightly easier, although the cgo bridge has some overhead of its own, so it wouldn't completely surprise me if a Unix FIFO were in fact faster, especially if there's serialization in both cases.
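For reference, the cgo route on the Go side would look roughly like the following; building with go build -buildmode=c-archive produces a static library plus a generated header that the Objective-C side can link and call directly. The exported function name and its string-based interface here are purely illustrative:

    // Build with: go build -buildmode=c-archive
    // This yields a static library plus a generated header that the
    // Objective-C side can call directly. The function name and its
    // string-based interface are illustrative only.
    package main

    import "C"

    //export PrimitiveStep
    func PrimitiveStep(request *C.char) *C.char {
        // Parse the request, compute one shape, and return it as a C string
        // (the caller is responsible for freeing it).
        _ = C.GoString(request)
        return C.CString("triangle\t10,20\t80,35\t44,90\t#e0483bff")
    }

    // main is required for buildmode=c-archive but never runs as a program.
    func main() {}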
Awesome, I see you've added new features too. There goes my evening. A suggestion for video: allow loading the shape data from the previous frame and using it as a starting point. That could bring you closer to real-time processing :)
Need to use it in some paintings though
Just tried it out, and it is probably the coolest 'Show HN' I've ever seen. Spent the last 2 hours playing around with some pictures and different configs, and I absolutely love what this is doing.
Also, really appreciate that you provided a value add to the product sold, without compromising the open source core. Thank you for that.
Anyways, good job, and good luck. Hope this sells tons, cause it truly is amazing.
I previously implemented a "compressed sensing" image system in Processing, and it produces very similar results.
I was wondering whether you will be adding support for exporting animations, both of the image being built up and (as you mention) of many images matching criteria formed from different seeds.
Do you mean you just ship a compiled Go binary as part of your app and interact with that? If so, did Apple not say anything about it during review?
I saw that you can call Go in mobile apps. I'm wondering if that's also applicable to desktop apps.
You need a constraint that guarantees, for example, that when nothing changes in the video, you won't see any changing primitives.
The site shows an example of what a still image looks like when rendered several times with the same parameters; it makes for an interesting animated effect. I think that could work as an animation style in its own right.
But for a more consistent style between frames, it'd help to have a way to seed the RNG consistently.
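For what it's worth, fixed per-frame seeding would be nearly a one-liner if the core exposed a seed parameter. A toy sketch follows (the function and its arguments are invented, not the library's API), though as the next comments point out, a fixed seed alone doesn't guarantee coherence:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // processFrame is a stand-in for running primitive on one video frame.
    // Giving every frame the same fixed seed means identical input frames
    // produce identical sequences of random shape proposals.
    func processFrame(seed int64, shapes int) {
        rng := rand.New(rand.NewSource(seed)) // same seed -> same proposals
        for i := 0; i < shapes; i++ {
            x, y := rng.Intn(1920), rng.Intn(1080)
            fmt.Printf("shape %d starts its search at (%d, %d)\n", i, x, y)
        }
    }

    func main() {
        const seed = 42
        processFrame(seed, 3)
        processFrame(seed, 3) // identical output: no flicker on identical frames
    }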
For example: what if you have a minor change in the area of the screen that got the first primitive (and hence used the first few RNG numbers)? Then all the remaining primitives will use different RNG numbers.
I think the problem becomes more difficult: you want to find the primitives that result in the minimal change wrt the previous frame.
(I will note that the website horizontally scrolls at 1024x768. A lot of sites do, though.)
https://github.com/dontpanic92/wxGo/issues/8 needs work though fyi
Here is the photo that kicked off my idea:
I also did another in the Spring that turned out nice:
Instead, the entire visual center of the image gets moved up by the Primitivization. It looks like trees on the top half get their exposed tops connected, but those same crowns are cropped in the reflection.
This is such a phenomenal project. Thanks for making it!
I have a crazy question: It generally seems to fill in detail uniformly, across the entire image. It's like the entire thing is coming slowly into focus. At some point, I'd prefer the algorithm to care a little less about filling in detail on the background, and focus more detail on the "items of interest" in a scene, the high-contrast regions or something. Perhaps even to have some manual control over it. Would this be possible? I am a total amateur when it comes to image analysis, btw, just thought it would be cool.
It would be possible to do that in the app if you could restrict new shapes to a specified region.
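Something as simple as rejection-sampling shape centers toward a user-drawn rectangle would probably do. A rough sketch of the idea; nothing like this exists in the app today and every name here is made up:

    package main

    import (
        "fmt"
        "image"
        "math/rand"
    )

    // proposeCenter keeps the usual random shape proposals but rejects most
    // of those whose center falls outside a user-specified focus rectangle,
    // so later shapes concentrate detail on the "items of interest".
    func proposeCenter(rng *rand.Rand, bounds, focus image.Rectangle, focusBias float64) image.Point {
        for {
            p := image.Pt(
                bounds.Min.X+rng.Intn(bounds.Dx()),
                bounds.Min.Y+rng.Intn(bounds.Dy()),
            )
            if p.In(focus) || rng.Float64() > focusBias {
                return p // inside the focus region, or allowed through anyway
            }
        }
    }

    func main() {
        rng := rand.New(rand.NewSource(1))
        img := image.Rect(0, 0, 1920, 1080)
        focus := image.Rect(600, 300, 1300, 800) // hypothetical region of interest
        for i := 0; i < 3; i++ {
            fmt.Println(proposeCenter(rng, img, focus, 0.9))
        }
    }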
I do wish I could buy it from your site vs. the App Store... I've just had so many bad experiences with having to "transfer my license" away from App Store purchases.
I made something similar years ago. The primary motivation was needing to print low-resolution images at poster sizes, back when camera technology hadn't yet reached high enough resolutions.
Mine used transparent images as the "shapes" and then it would "stamp" the shapes onto a larger image. It could use multiple shapes in a single image.
Here's an example.
It doesn't look nearly as nice, but maybe someone with more creativity than me could create shapes that make it much more interesting. Anyone have any interest in a library that does that?
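The stamping itself is just alpha compositing with the Over operator; a stripped-down version using only the standard library looks something like this (file names, sizes, and counts are arbitrary):

    package main

    import (
        "image"
        "image/draw"
        "image/png"
        "math/rand"
        "os"
    )

    func main() {
        // Load one transparent "shape" image; any PNG with alpha will do.
        f, err := os.Open("shape.png")
        if err != nil {
            panic(err)
        }
        shape, err := png.Decode(f)
        f.Close()
        if err != nil {
            panic(err)
        }

        // Stamp it onto a poster-sized canvas at random positions.
        canvas := image.NewRGBA(image.Rect(0, 0, 3000, 2000))
        for i := 0; i < 500; i++ {
            offset := image.Pt(rand.Intn(3000), rand.Intn(2000))
            r := shape.Bounds().Add(offset)
            draw.Draw(canvas, r, shape, shape.Bounds().Min, draw.Over)
        }

        out, err := os.Create("poster.png")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        png.Encode(out, canvas)
    }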
to my embarrassment, i never got it completely right. the last instalment even provided statistics that could be used to compare the effectiveness of different strategies and parameters.
the most recent one featured:
* classic genetic algorithm (population size etc.) with different methods for selection and crossover (roughly sketched after this list)
* dumb hill climbing
* the "additive" method described in OP's app
* switching of methods at certain points (i.e. switch to hill climbing if the genetic algorithm didn't improve things for a certain time)
* bitmap and svg output
* statistics output that can be plotted with a plotting library so you can compare the effectiveness of different strategies
* multithreading incl. tests to find out the sweet spot for the number of concurrent threads
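for reference, the loop behind the first bullet looked roughly like this; the genome layout and the fitness function below are toy stand-ins, not my actual code:

    package main

    import (
        "math/rand"
        "sort"
    )

    // Genome is a flat list of shape parameters; fitness stands in for the
    // pixel difference against the target image (lower is better).
    type Genome []float64

    func fitness(g Genome) float64 {
        var s float64
        for _, v := range g {
            s += v * v
        }
        return s
    }

    // nextGeneration: rank the population, keep the best (elitism), then
    // refill with single-point crossover and mutation of parents drawn from
    // the top half.
    func nextGeneration(pop []Genome, mutationRate float64) []Genome {
        sort.Slice(pop, func(i, j int) bool { return fitness(pop[i]) < fitness(pop[j]) })
        next := []Genome{pop[0]}
        for len(next) < len(pop) {
            a, b := pop[rand.Intn(len(pop)/2)], pop[rand.Intn(len(pop)/2)]
            child := make(Genome, len(a))
            cut := rand.Intn(len(a)) // single-point crossover
            copy(child, a[:cut])
            copy(child[cut:], b[cut:])
            for i := range child {
                if rand.Float64() < mutationRate {
                    child[i] += rand.NormFloat64() * 0.1
                }
            }
            next = append(next, child)
        }
        return next
    }

    func main() {
        pop := make([]Genome, 50)
        for i := range pop {
            pop[i] = make(Genome, 200)
            for j := range pop[i] {
                pop[i][j] = rand.Float64()
            }
        }
        for gen := 0; gen < 100; gen++ {
            pop = nextGeneration(pop, 0.01)
        }
    }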
i don't know why, but i never really reached the point where my results were of comparable quality to roger alsing's mona lisa or even alteredQualia's image evolver (http://alteredqualia.com/visualization/evolve/).
well, soon it's time for the next attempt.
one interesting realisation: my desktop computer is about 10x as fast as the raspberry pi 3 for this kind of CPU intensive work.
I sadly have no Mac to run it on - will you be porting to other OSes (I'm primarily on Windows), or is this the same code you shared on GitHub, just with a UI?
Edit: Go has good GIF support but doesn't really do dithering and stuff.
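To be precise, image/draw does ship one basic option, a Floyd-Steinberg drawer, but nothing beyond that. A minimal sketch of encoding already-rendered frames to an animated GIF with it (the frame source and delay values are placeholders):

    package main

    import (
        "image"
        "image/color/palette"
        "image/draw"
        "image/gif"
        "os"
    )

    // writeGIF quantizes each already-rendered frame to the Plan 9 palette
    // with the standard library's Floyd-Steinberg drawer and writes an
    // animated GIF. delay is in hundredths of a second per frame.
    func writeGIF(path string, frames []image.Image, delay int) error {
        anim := &gif.GIF{}
        for _, frame := range frames {
            p := image.NewPaletted(frame.Bounds(), palette.Plan9)
            draw.FloydSteinberg.Draw(p, frame.Bounds(), frame, frame.Bounds().Min)
            anim.Image = append(anim.Image, p)
            anim.Delay = append(anim.Delay, delay)
        }
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        return gif.EncodeAll(f, anim)
    }

    func main() {
        // A single blank placeholder frame, just so the sketch runs.
        frame := image.NewRGBA(image.Rect(0, 0, 320, 240))
        if err := writeGIF("out.gif", []image.Image{frame}, 10); err != nil {
            panic(err)
        }
    }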
If you're able to do a follow up post in e.g. a month I'd enjoy reading!
Especially if you decide to sell directly I'd be curious how that fares vs the App Store.
Thanks for sharing here, nice to see this.
I'm a sucker for this kind of thing. Rotoscoping always fascinated me and I'm still looking for something that converts photos to pencil drawings.
One question: That seems like a really permissive license for a $9.99 app (I realize GitHub's just got the core). Are you worried about someone cloning it?
One request: I don't know if it's possible, but in iOS Tweetbot the bot's pictures show the unmodified picture first (requiring a swipe to see the bot's work). It feels to me like it would be better to show the primitive version first, but I realize this may not be feasible or universally preferred.
I'd also think that app store copycats try and pick easy, profitable projects. Just having the core implemented in Go would probably be a turn-off for many of them.
I really appreciate keeping the core FLOSS and hope that it won't cause problems for the author.
On a separate note, @fogleman, would you be able to talk a little about how you architected the app? I'm curious how you've set up Objective-C communication with Go using NSTask and NSPipe.
Now some of you nerds need to tell me how I can run the dash-array/dash-offset animation with 2000 paths without it being so slow! :D
Then make an open-world game out of it.
Looks like a great tool to be honest, and the UI looks beautiful. Wish you the best, and keep up the awesome work!
Just an FYI, the links at the top link to the current page instead of sections.
Buying it now...
Here is the original concept, source included (2008): https://rogeralsing.com/2008/12/07/genetic-programming-evolu...
Even my genetic implementations were "slowish", both simple hill climbing (single parent) and two-parent with crossover/mutation. Though, after seeing this awesome implementation, I'm going to go do some work on improving mine.
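For anyone curious, the single-parent version is just mutate-and-keep-if-better. A toy sketch, where the candidate encoding and score function stand in for a shape list and its pixel error against the target:

    package main

    import "math/rand"

    // Candidate stands in for a list of shapes; score stands in for the
    // pixel error against the target image (lower is better).
    type Candidate []float64

    func score(c Candidate) float64 {
        var s float64
        for _, v := range c {
            s += v * v
        }
        return s
    }

    // hillClimb: mutate one copy of the current best, keep it only if it
    // scores better, repeat.
    func hillClimb(start Candidate, steps int) Candidate {
        best, bestScore := start, score(start)
        for i := 0; i < steps; i++ {
            trial := append(Candidate(nil), best...) // copy
            trial[rand.Intn(len(trial))] += rand.NormFloat64() * 0.1
            if s := score(trial); s < bestScore {
                best, bestScore = trial, s
            }
        }
        return best
    }

    func main() {
        c := make(Candidate, 200)
        for i := range c {
            c[i] = rand.Float64()
        }
        _ = hillClimb(c, 10000)
    }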
I'm blown away by the fantastic effect from such a random algorithm; I was expecting something classic like edge detection and image segmentation, at least in the first stage of image creation.
> "Unique" means "one of a kind." Something can't be very unique, nor can it be extremely historic.
Conversely, love the .lol TLD.
> if a word denotes the end-point of some scale (as unique surely does), then it can be used — and will be used — in describing approximations to that end-point, using approximative expressions like almost and nearly. (If there are only two occurrences of X in the world, then each of these is nearly unique.) Then, of course, you can ask how close to the end-point something is by asking how X it is, and you can describe something that has very few competitors for being the one and only as very X, and you can describe something that has no competitors at all as entirely X.
> Back up. Some of you are objecting that unique does not denote the end-point of a scale, and you say that because unique is not used in mathematics that way. But it's a mistake to suppose, when we're talking about ordinary language, that the mathematical usage of terms takes priority over ordinary people's usage of them. Yes, in mathematical usage, unique is used "crisply", for 'one and only one' (and that's an important concept to have in mathematical contexts), but, frankly, this really doesn't have beans to do with how unique is used in ordinary English. Instead, the mathematical usage is a specialization, a refinement, of ordinary English in a technical context.
 - https://en.wikipedia.org/wiki/Lenna#Controversy
Because a stock image is not standardized and has not been a standard for decades like Lenna has.
It's just about impossible to go back in history and change all the research papers and recompile all the algorithms and update them to a new image.
Of course we could still do it if enough people cared. But do we? Is pornography morally bad? What about cropped pornography like Lenna? Do the majority of people even think pornography is sexism? If it is sexism, does that inherently make pornography bad? Is all sexism bad? Is it bad that men are attracted to women and vice versa, and that sexually biased attraction exists? I would answer in the negative to all of these.
Before this thread I had no idea said controversy even existed. As far as I knew, the Lenna image was just a pretty decent photo, not the result of cropping a Playboy cover.
I guess the idea that a derivative artwork can be seen as a unique artwork in its own right doesn't mean anything anymore...
Could you elaborate on this?
If you think that everyone in the porn industry wants to be there and loves their job then you're missing the wood for the trees...
It's a minor matter, but it's a shame you felt the need to remove the image when it's so useful and relevant.
I think the flower works well, but I might replace it yet again if I find an even better demo when I have more time to spend on it.
there's a reason why it's seen as a "standard test image", and not as some fodder for sexism. it's the above plus historical baggage.