
The Art of PNG Glitch - erikschoster
http://ucnv.github.io/pnglitch/
======
mrspeaker
I was always interested in glitching images, but was frustrated by the
checksum/read errors. I took a slightly different approach with JPGCrunk
([http://www.mrspeaker.net/dev/jpgcrunk/](http://www.mrspeaker.net/dev/jpgcrunk/)):
instead of modifying the image then trying to display it, randomly mess with
the internals of the encoder (I used a JavaScript implementation so it was
easy to modify
[https://github.com/mrspeaker/jpgcrunk/blob/master/scripts/en...](https://github.com/mrspeaker/jpgcrunk/blob/master/scripts/enc.js#L548))

This way the "glitching" happens inside the encoder algorithm, and then
there's nothing to have to repair!
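A toy sketch of that idea (not the actual JPGCrunk code, which perturbs a JavaScript JPEG encoder; the encoder, function names, and parameters here are all my own invention): if the corruption happens inside the encoder, the output stream stays structurally valid, so there's nothing to repair. Here a hypothetical run-length encoder randomly perturbs run lengths mid-encode:

```python
import random

def rle_encode_glitched(pixels, glitch_rate=0.1, seed=0):
    """Toy run-length encoder that corrupts itself mid-encode.

    Instead of damaging the finished file (which would break a real
    format's checksums), we randomly perturb run lengths *inside* the
    encoder, so the output is always a valid RLE stream -- it just
    decodes to a glitched image.
    """
    rng = random.Random(seed)
    runs = []
    i = 0
    while i < len(pixels):
        j = i
        while j < len(pixels) and pixels[j] == pixels[i]:
            j += 1
        length = j - i
        if rng.random() < glitch_rate:
            length = max(1, length + rng.randint(-2, 2))  # the "glitch"
        runs.append((length, pixels[i]))
        i = j
    return runs

def rle_decode(runs):
    """Always succeeds: every run list is a valid stream by construction."""
    out = []
    for length, value in runs:
        out.extend([value] * length)
    return out
```

With `glitch_rate=0` the round trip is lossless; with a higher rate the decode still works, it just produces shifted, smeared pixel runs.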

~~~
nosuchthing
Whoa, cool.

Reminds me a bit of a video codec technique to exaggerate the motion frames
and compression ala "datamoshing".

Keyframes are removed, which leaves only information from the previous frame
to be updated by motion frames. Then the update/motion frames are repeated
sequentially, which further saturates the previous frame and creates a fluid
melting effect.

in use:
[https://www.youtube.com/watch?v=mvqakws0CeU](https://www.youtube.com/watch?v=mvqakws0CeU)

how to via ffmpeg:
[https://www.youtube.com/watch?v=tYytVzbPky8](https://www.youtube.com/watch?v=tYytVzbPky8)

~~~
Vexs
Gah, that just makes me uncomfortable on so many levels. It's not helped by
the content of the video either...

------
lbebber
I made an animated PNG glitching demo a while ago.
[http://codepen.io/lbebber/pen/EjVPao](http://codepen.io/lbebber/pen/EjVPao)

The approach is simple: just mess with the base64 string.
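A minimal Python sketch of that approach (the demo itself is JavaScript; the function name and parameters here are my own): swap base64 characters for other valid base64 characters so the string stays decodable, and skip the first characters so the file header survives.

```python
import base64
import random

B64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def glitch_base64(data: bytes, skip: int = 100, swaps: int = 5, seed: int = 0) -> bytes:
    """Corrupt an image by editing its base64 text rather than its raw bytes.

    Replacing one base64 character with a different valid one keeps the
    string decodable, so the browser still gets bytes back -- just the
    wrong ones. `skip` leaves the first characters (the file header)
    untouched so the decoder can still recognise the format, and the last
    group is avoided so the padding stays intact.
    """
    rng = random.Random(seed)
    chars = list(base64.b64encode(data).decode("ascii"))
    # Distinct positions, each changed to a *different* character.
    for i in rng.sample(range(skip, len(chars) - 4), swaps):
        chars[i] = rng.choice([c for c in B64_CHARS if c != chars[i]])
    return base64.b64decode("".join(chars))
```

In the CodePen the same trick is applied to a data-URI and the result is fed straight into an `<img>` tag on a timer, which is what animates it.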

------
pavel_lishin
Chrome seems to load the examples really slowly, and some of them not at all.

~~~
navls
May have been a connectivity issue. Seemed to work ok on my chrome.

~~~
ics
No connectivity issue here, same (slow) in Chrome on OS X.

------
nateguchi
A few people did this back in the days of flash [1]

[1] [http://blog.soulwire.co.uk/wp-content/uploads/2010/02/glitch...](http://blog.soulwire.co.uk/wp-content/uploads/2010/02/glitchmap.swf)

(more info:
[http://blog.soulwire.co.uk/laboratory/flash/as3-bitmapdata-g...](http://blog.soulwire.co.uk/laboratory/flash/as3-bitmapdata-glitch-generator))

~~~
nunull
Messing with those faders feels weirdly good.

------
TazeTSchnitzel
I didn't know PNG was so simple. Encode scanlines in terms of each other, then
pass the whole thing to DEFLATE... and it is effective. That's very elegant.
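That two-stage pipeline can be sketched in a few lines of Python, here using only the "Sub" filter (type 1) rather than PNG's full per-line filter choice (the function names are mine; PNG really does prefix each scanline with a filter-type byte):

```python
import zlib

def sub_filter(scanline: bytes, bpp: int = 1) -> bytes:
    """PNG's 'Sub' filter (type 1): each byte becomes its difference,
    mod 256, from the byte `bpp` positions to its left."""
    out = bytearray(scanline)
    for i in range(len(scanline) - 1, bpp - 1, -1):
        out[i] = (scanline[i] - scanline[i - bpp]) & 0xFF
    return bytes(out)

def filter_then_deflate(scanlines, bpp: int = 1) -> bytes:
    """The two-stage pipeline from the comment: filter each scanline
    (prefixed with its filter-type byte, as PNG does), then DEFLATE."""
    raw = b"".join(b"\x01" + sub_filter(line, bpp) for line in scanlines)
    return zlib.compress(raw)
```

On a smooth gradient the filter turns each scanline into a long run of small constant deltas, which DEFLATE then crushes far better than the raw pixel values.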

~~~
jdpage
PNG is one of my favourite file formats. It also has a chunking system which
lets you store extra data, which isn't really covered here. Definitely worth
looking into if you're interested in binary format design.

The other image format that's worth knowing is TIFF, because it's an insanely
simple and flexible format to write out if you don't have access to a library.
It lets you re-order the image data by tiles or scan lines, and lets you put
various tables almost anywhere in the file, which makes it great for
outputting large images from a parallel renderer: you can chop the image up
however suits the algorithm, write out the parts to disk as you get them, and
then write a table at the end of the file describing the order, without ever
having to seek.
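The chunking system mentioned above is simple enough to sketch directly (helper names are my own): every chunk is a big-endian length, a 4-byte ASCII type, the payload, and a CRC-32 over type plus payload, so you can both walk a file and splice extra data in.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, payload: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, payload, CRC-32."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

def iter_chunks(data: bytes):
    """Walk a PNG file chunk by chunk, verifying each CRC."""
    assert data[:8] == PNG_SIG, "not a PNG"
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        payload = data[pos + 8:pos + 8 + length]
        crc, = struct.unpack(">I", data[pos + 8 + length:pos + 12 + length])
        assert crc == zlib.crc32(ctype + payload), "corrupt chunk"
        yield ctype.decode("ascii"), payload
        pos += 12 + length
```

For example, a standard `tEXt` chunk (keyword, NUL byte, text) can be dropped in between `IHDR` and `IDAT` and any conforming decoder will carry it along while ignoring it.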

~~~
gus_massa
In particular, PNG may have a chunk that stores the gamma information of the
file, which is important for correcting the differences between the standard
monitor configurations of PCs and Macs. This value is difficult to find, and
some programs ignore it while others use it. I had a few nightmare cases
trying to make a webpage look right until I realized that the PNG had a
hidden gamma value.

More details of a similar case: [http://morris-photographics.com/photoshop/articles/png-gamma...](http://morris-photographics.com/photoshop/articles/png-gamma.html)
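The chunk in question, `gAMA`, is tiny: its payload is a single big-endian uint32 holding gamma times 100000 (so 1/2.2 is stored as 45455). A small sketch, with helper names of my own:

```python
import struct
import zlib

def make_gama_chunk(gamma: float) -> bytes:
    """Build a PNG gAMA chunk: the payload is one big-endian uint32
    holding gamma * 100000."""
    payload = struct.pack(">I", round(gamma * 100000))
    return (struct.pack(">I", len(payload)) + b"gAMA" + payload
            + struct.pack(">I", zlib.crc32(b"gAMA" + payload)))

def read_gama(chunk: bytes) -> float:
    """Recover the gamma value from a serialized gAMA chunk."""
    value, = struct.unpack(">I", chunk[8:12])
    return value / 100000
```

Whether a viewer applies this value, ignores it, or applies it only sometimes is exactly the inconsistency described above.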

~~~
userbinator
The gamma chunk can also be used for some interesting tricks:

[https://news.ycombinator.com/item?id=10192413](https://news.ycombinator.com/item?id=10192413)

------
fitzwatermellow
While it's definitely not my cup of tea, Glitch Art certainly has a long and
twisted history ;)

[http://kernelmag.dailydot.com/issue-sections/features-issue-...](http://kernelmag.dailydot.com/issue-sections/features-issue-sections/12265/glitch-art-history/)

------
mfkp
Related - online tool to play with JPG glitching:
[https://snorpey.github.io/jpg-glitch/](https://snorpey.github.io/jpg-glitch/)

------
spin
I am reminded of the story behind the cover art for the soundtrack to "The
Social Network": [http://www.rob-sheridan.com/TSN/](http://www.rob-sheridan.com/TSN/)

------
bhauer
My favorite is Figure 17, the alpha glitched image with "PNG" written four
times.

~~~
madez
I also stopped at that figure and thought “I really like this one — I get it”,
which surprised me because the other examples looked too trivial to appreciate
or too messy for me to understand.

------
octatoan
How was the header image made?

------
psychobabble
Whhhyyyyyy ?

------
imaginenore
Ironically, the big header image is a JPG

[http://ucnv.github.io/pnglitch/files/header.jpg](http://ucnv.github.io/pnglitch/files/header.jpg)

------
itistoday2
I think the very last one is my favorite. It has an interesting mix of
modified and unmodified components.

------
wonkaWonka
The interesting part about PNG is that since it uses the DEFLATE algorithm
and only applies compression per row/line, with no awareness of relationships
between lines, in most cases the effective compression is nearly the same as
if you took each row of uncompressed pixels apart as an individual image and
put all those separate images in a zip file.

Disregarding the cruft of headers and other file-format overhead, there would
be a direct relationship between the size of a PNG image and the raw,
uncompressed row-level data zipped up and handled in a similar way.

~~~
jsnell
I don't think that's true.

First of all, this whole article is essentially about the "filters" that are
used to encode the data more efficiently based on the surrounding data before
it's passed to deflate. Basically you compute some kind of delta from each
pixel compared to the pixel above it, to its left, or a combination of the
two, with the deltas generally compressing better than the raw data would
have. That's already a big difference from compressing each row separately.

Second, the deflate compression does not happen individually for each row. It
happens individually for each IDAT block, but those blocks can be of fairly
arbitrary size. Using a separate IDAT block for each row would seem very odd.
(Can you even use filters in that case, or does the state reset with each new
block?)
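The disagreement is easy to test empirically. A quick sketch (my own, not from the thread) comparing one DEFLATE stream over the whole image against compressing each row on its own shows how much cross-row redundancy a single stream can exploit, which per-row zipping cannot:

```python
import zlib

def whole_stream_size(rows) -> int:
    """One DEFLATE stream over the whole image, which is effectively
    what a PNG encoder produces."""
    return len(zlib.compress(b"".join(rows)))

def per_row_zip_size(rows) -> int:
    """Each row compressed independently, like zipping one file per row
    (ignoring zip's own header overhead)."""
    return sum(len(zlib.compress(row)) for row in rows)
```

For an image whose scanlines resemble each other, the single stream encodes every repeated row as a cheap back-reference into DEFLATE's 32 KB window, while the per-row version pays the full cost for every row.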

