
Evolving an image out of polygons - chaosmachine
http://screamingduck.com/Article.php?ArticleID=46&Show=ABCE
======
apu
A promising direction would be to have the polygons generate gradients
instead of colors directly, and then use Poisson blending [1,2] with gradient
mixing [e.g. 3] to produce the final image. That would give far fewer
noticeable artifacts at a similar compression ratio.

But I should point out that, while a fun hack, this is not really a viable
compression method in practice.

[1] Original paper:
[http://scholar.google.com/scholar?cluster=703822277846437904...](http://scholar.google.com/scholar?cluster=7038222778464379047&hl=en&as_sdt=1,48)

[2] Some easier-to-understand slides about it:
<http://www.cs.unc.edu/~lazebnik/research/fall08/jia_pan.pdf>

[3] Someone's project implementing it:
[http://www.cs.brown.edu/courses/csci1950-g/results/proj2/edw...](http://www.cs.brown.edu/courses/csci1950-g/results/proj2/edwallac/)

------
hartror
Related discussions as we've talked about this stuff before here:

* <http://news.ycombinator.com/item?id=392036>

* <http://news.ycombinator.com/item?id=389727>

* <http://news.ycombinator.com/item?id=503811>

* <http://news.ycombinator.com/item?id=450886>

(this is still an excellent post however)

------
laconian
Neat. After playing Out of This World a long time ago, I wondered whether it
was viable to compress video data using polygons, annotated with image-block
overlays in the high-frequency/high-error areas. Looks like I might've been
onto something, since it seems like polys alone are pretty good at capturing
the image.

~~~
ekianjo
This was done a very long time ago, when the demo group Spaceballs released
"9 Fingers" on the Amiga, in which a number of video sequences were
reproduced entirely in polygons. There were artifacts, but the effect was
impressive at the time, when MPEG-1 decompression was impossible on such low-
power machines. You can see the effect on YouTube. I do not know what tools
they used to compress the video into polygons, though.

~~~
egypturnash
"As far as I remember, we used a S-VHS camcorder and replayed the video with a
VHS-player capable of showing the video one frame at the time with a certain
degree of "stableness". Then we digitized the frame with a normal digitizer
(DigiView?). Custom software were then used to "vectorize" the images. But all
this is a little hazy. It's been "a few" years since I called myself Dark
Helmet..."

\- [one of the dudes who wrote 9 Fingers](http://www.youtube.com/comment?lc=9Y2eZp3mPIlNEaQPjdPLHJ3xlsWALEoKr39UZ3pPihQ)

I would not be surprised if the algorithm used to compress the video into
polygons was called "tracing it by hand". I'm pretty sure I've read that was
the algorithm used for their earlier video-focused demo "State of the Art".

~~~
ekianjo
Thanks for your answer - very interesting. You must have owned an Amiga if
you know this subject in such great detail :)

I did not know that for "State of the Art" they drew the vectors by hand.
Which is basically what Eric Chahi did for Another World back then, using
rotoscoping. I am assuming the technique they used for "9 Fingers" is
different, though. There are WAY more vectors in 9 Fingers than in State of
the Art, and it would have taken a huge amount of time to reproduce them all
by hand. I am pretty sure they found a smarter way to do it (you can guess
this from the fact that the video reproduction in vectors is almost perfect
in 9 Fingers, while in State of the Art the animation is sometimes jerky and
they use shadows/blur/changing backgrounds to hide the imperfections).

------
jakubw
Would it make sense to include the error image in the format to make the
compression lossless? As I see it, if the final fitness is high enough, the
error image should be highly compressible?

~~~
zellyn
No. The error image just contains "the hard stuff".

~~~
moultano
That's not really true. The residuals of a good lossy compression algorithm
ought to look like uncorrelated noise with a smaller dynamic range than the
original image, and the second fact (smaller dynamic range) makes them
compressible.

~~~
DarkShikari
Dynamic range is not really relevant to compression. Entropy is much more
meaningful, and error images typically have a huge amount of entropy.

~~~
moultano
That doesn't make any sense. You've taken an image with arbitrary bytes and
turned it into one where the bytes are tightly centered around a small range
of values. That's perfect for Golomb coding (for instance).
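A minimal sketch of that claim (not a specific proposal from the thread): fold zero-centred residuals to non-negative integers with a zigzag map, then count the bits a Rice code (Golomb with a power-of-two parameter) would spend versus storing raw bytes. The Gaussian residual model and the parameter search are my assumptions.

```python
import numpy as np

def zigzag(v):
    # Fold signed residuals to non-negative ints: 0,-1,1,-2,2 -> 0,1,2,3,4
    return np.where(v >= 0, 2 * v, -2 * v - 1)

def rice_code_bits(values, k):
    # Rice code (Golomb, m = 2**k): quotient v >> k in unary
    # (q ones + a stop bit), then k literal remainder bits.
    return int(np.sum((values >> k) + 1 + k))

rng = np.random.default_rng(0)
# Toy residuals: tightly centred around zero, small dynamic range.
residuals = rng.normal(0.0, 2.0, 10000).round().astype(int)
folded = zigzag(residuals)
best_bits = min(rice_code_bits(folded, k) for k in range(8))
raw_bits = 8 * residuals.size      # storing each residual as a raw byte
```

With residuals like these, the best Rice parameter lands well under half the raw-byte cost, which is the "smaller dynamic range makes them compressible" point in miniature.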

~~~
jerf
No free lunches. For a lossless compressor, (information-theory bits of
polygons) + (information-theory bits of remaining error) >= (information-
theory bits of original image), where the inequality is strict whenever the
first two elements aren't perfectly separated by your process or have
inescapable overlap.

I specify "information theory bits" because they aren't really what you see in
the computer's RAM; they're closer to "post-compression bits". But regardless,
no matter how you move the encoding around there is no escaping information
theory.
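"Information-theory bits" can be made concrete with an empirical Shannon entropy estimate. The toy signal and the trivial predictor below are my own illustration: the residual's per-symbol entropy drops well below the original's, which is why the residual side can be cheap, while the model (here, the predictor; in the article, the polygons) has to account for the remaining bits.

```python
import numpy as np

def entropy_bits(symbols):
    # Empirical Shannon entropy (bits per symbol) from the histogram.
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
# A smooth 8-bit "scanline": a slow ramp plus a little sensor noise.
signal = (np.linspace(0, 200, 4096) + rng.normal(0, 1.5, 4096)).round().astype(int)
residual = np.diff(signal)     # trivial model: predict the previous sample
h_orig = entropy_bits(signal)
h_resid = entropy_bits(residual)
```

The total stored cost (model + residual) still cannot beat the entropy of the original, exactly as the inequality above says.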

~~~
moultano
Obviously. That's the definition of compression. If the image is well modeled
by overlapping polygons + small residuals, the encoding will better approach
the uncomputable ideal, and thus, compress.

------
aerique
Why don't people date their blog posts? (or web pages in general)

~~~
nodata
People do. He didn't.

------
user24
Nice. Reminds me of a similar project which I've discussed here:
[http://www.puremango.co.uk/2011/03/genetic-algorithm-example...](http://www.puremango.co.uk/2011/03/genetic-algorithm-examples/)

------
vectorjohn
I liked how, when I read the article, the first picture looked like Hitler
out of the corner of my eye.

